High-speed multi-objective Fourier ptychographic microscopy

Abstract: The ability of a microscope to rapidly acquire wide-field, high-resolution images is limited by both the optical performance of the microscope objective and the bandwidth of the detector. The use of multiple detectors can increase electronic-acquisition bandwidth, but the use of multiple parallel objectives is problematic since phase coherence is required across the multiple apertures. We report a new synthetic-aperture microscopy technique based on Fourier ptychography, in which both the illumination and image-space numerical apertures are synthesized using a spherical array of low-power microscope objectives that focus images onto mutually incoherent detectors. Phase coherence across apertures is achieved by capturing diffracted fields during angular illumination and using ptychographic reconstruction to synthesize wide-field, high-resolution amplitude and phase images. Compared to conventional Fourier ptychography, the use of multiple objectives reduces image-acquisition times by increasing the area for sampling the diffracted field. We demonstrate the proposed scalable architecture with a nine-objective microscope that generates an 89-megapixel, 1.1 µm-resolution image nine times faster than can be achieved with a single-objective Fourier-ptychographic microscope. New calibration procedures and reconstruction algorithms enable the use of low-cost 3D-printed components for longitudinal biological sample imaging. Our technique offers a route to high-speed, gigapixel microscopy, for example, imaging the dynamics of large numbers of cells at scales ranging from sub-micron to centimetre, with an enhanced possibility of capturing rare phenomena.


Introduction
There is an unmet need to record wide-field, high-resolution microscopic images of dynamic events at high frame rates [1][2][3][4]. Examples include subcellular imaging in high-throughput digital pathology, and of rare and dynamic events, such as cell divisions within large in vitro cancer cell cultures. The ability of a microscope to record wide-field, high-resolution images is, however, fundamentally limited by diffraction and optical aberrations [5][6][7]. Diffraction limits the minimum resolvable feature size to λ/(NA_obj + NA_ill), where λ is the wavelength of light, and NA_obj + NA_ill is the sum of the numerical apertures of the objective and illumination. A typical high-resolution microscope with NA_obj ∼ 0.9 offers a lateral resolution of ∼0.3 µm (λ = 550 nm), but only within a commensurately small depth of field of 0.7 µm [8]. That is, a microscope that is able to resolve sub-cellular features can do so only within a thin layer that is much less than the thickness of the cell [8]. Optical aberrations of high-NA lenses further limit the field of view to typically 0.65 × 0.65 mm, yielding an image with a maximum space-bandwidth product (SBP) (resolution × field of view) of around 5 megapixels [5,6,9].
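The numbers quoted above follow directly from the formulas in the text; as an illustrative check (using the common depth-of-field approximation λ/NA²):

```python
# Diffraction-limited resolution and depth of field for the example in the text.
wavelength_um = 0.550          # 550 nm illumination
na_obj = 0.9                   # objective NA
na_ill = 0.9                   # matched illumination NA

# Lateral resolution: lambda / (NA_obj + NA_ill)
resolution_um = wavelength_um / (na_obj + na_ill)
print(f"lateral resolution ~ {resolution_um:.2f} um")   # ~0.31 um

# Depth of field (common approximation): lambda / NA^2
dof_um = wavelength_um / na_obj**2
print(f"depth of field    ~ {dof_um:.2f} um")           # ~0.68 um
```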
High-SBP images may be constructed in time sequence by stitching together a mosaic of images recorded while stepping the sample through the field of view of a high-resolution microscope. Although this can yield a very high SBP, image acquisition is slow and requires high-cost, high-precision mechanical translation of the sample. In multi-objective Fourier-ptychographic microscopy (MOFPM), by contrast, an array of objectives simultaneously intercepts distinct bands of the field diffracted by the sample, each forming distinct low-resolution images on associated detectors. The solid angle subtended by the nine objectives proportionately increases both the optical étendue and the instantaneous SBP of image acquisition. More generally, for n parallel objectives capturing images for each of m illumination angles, a total of n · m low-resolution images are recorded. Achieving equivalent SBP with a single-objective FPM would also require n · m images, but at the expense of n · m illumination angles. Compared to MOFPM, a conventional FPM therefore has an n-times larger image-acquisition overhead for a given SBP. Notably, there are no fundamental obstacles to the implementation of a full hemispherical array of objectives with a 100% fill factor, analogous to that demonstrated in gigapixel photography [9]. Consequently, the MOFPM architecture enables the maximum possible étendue and an arbitrarily high space-bandwidth-time product (SBTP).

In ptychographic imaging, the goal is to collect multiple images encoding various spatial frequencies of the sample. Which spatial frequencies pass through the pass-band of the optical system and are detected depends on either the illumination angle or the position of the aperture. In multi-objective Fourier ptychography, a combination of illumination angles and aperture positions is used to design the optical system. The addition of multiple apertures enables a reduction in the number of illumination angles used without loss of reconstructed image quality or resolution. Order-of-magnitude improvements in image-capture speed can be achieved through parallelised image capture, enabling an experimental configuration providing near-snapshot gigapixel imaging.
Fig. 1. A picture of our nine-camera experimental prototype is shown on the left, with a CAD design on the right.
Although we report the first demonstration of FPM with multiple objectives used in parallel, the geometry has common features with previous investigations into the feasibility of MOFPM [31] and with wide-field digital holography [32]. In particular, these articles report the scanning of a single objective through the diffraction pattern of the sample to enable the realisation of an increased SBP by time-sequential aggregation of data. Additionally, in conventional single-objective FPM, and in [31,32], all low-resolution images are recorded with identical imaging distortion and aberrations, which enables relatively straightforward Fourier-ptychographic aggregation of the image spectra into a single high-resolution spectrum. Hence, the use of a single objective enables considerable simplification of calibration and image recovery using conventional FPM algorithms. Lastly, in these proof-of-principle experiments, the longitudinal stability of the instrument was never an issue. However, our use of multiple mutually tilted objectives and multiple dissimilar sensors poses significant challenges for computational reconstruction and calibration algorithms, which we have addressed in this manuscript.
In MOFPM, the multiple objectives exhibit dissimilar imaging distortions and optical aberrations that vary substantially between the n low-resolution images due to variations in geometry and manufacturing imperfections. To provide in-focus off-axis imaging across the field of view and to minimize off-axis aberrations, we used the Scheimpflug imaging configuration [33] for the off-axis objectives. Furthermore, the off-axis cameras in MOFPM record darkfield images only, unlike conventional single-objective FPM where both brightfield and darkfield images are present. The lack of high-signal-to-noise information in brightfield images makes aberration recovery and computational convergence extremely challenging, especially in noisy imaging conditions. With the improved reconstruction algorithms presented here, we can recover the aberrations of off-axis cameras from darkfield images without the need for additional calibration data, unlike the methods in [31]. We also developed new calibration and image-recovery algorithms that compensate for the dissimilar distortions and dissimilar field-dependent aberrations during Fourier-ptychographic image synthesis.
We report a practical demonstration of the MOFPM concept with a nine-objective MOFPM that is able to record an 89-megapixel, 1.1 µm-resolution image in 1 s of image acquisition (although latency in the camera readout electronics increased this time to 3 s in our implementation). To achieve identical SBP without multiplexing and multiple cameras, our setup would require a 15× longer image-acquisition time of 45 seconds. This improvement exceeds the 10-fold acquisition-time reduction offered by the highest-SBTP FPM demonstration [1]. With the experimental design, reconstruction and calibration techniques outlined in this manuscript, we demonstrate the feasibility of scaling this new architecture to high-resolution microscopy with arbitrarily high SBP and SBTP.
In the next section, we introduce the theoretical principle of MOFPM, followed by experimental quantification of resolution, demonstration of image reconstruction using a histology sample, and time-resolved imaging of Dictyostelium cell dynamics. Detailed explanations of the automatic self-calibration and reconstruction algorithms are included in the Supplementary material.

Principle of multi-objective FPM
Given a thin sample with a transmission function o(r) illuminated by a plane wave, the diffraction pattern in the Fourier plane can be expressed as O(k − k_i) [12], where r and k denote space and spatial-frequency co-ordinates respectively. The wave vector k_i corresponds to the angular illumination by LED i, which translates the diffraction pattern with respect to the optical system. The translated sample spectrum is intercepted by the objective lens of a microscope, defined by its pupil function P_c(k), where the subscript c refers to the "camera" index of a multi-camera system, and optical aberrations in P_c(k) are unique to each lens. In MOFPM, the frequency spectrum is intercepted by multiple cameras simultaneously (see Fig. 2) and is low-pass filtered by the aperture to produce a spectrum O(k − k_i − k_c)P_c(k). The wave vector k_c indicates the position of each camera with respect to the diffracted spectrum for the given illumination angles. Multiple frequency bands are recorded in parallel, reducing the number of time-sequential illuminations required by a factor equal to the number of cameras. It should be emphasized that, unlike other multi-camera FPM implementations [16], in MOFPM each camera images the same area of the sample. The SBP and resolution are then computationally increased by synthesis of an increased image-space NA. The low-pass filtered spectrum transmitted through each objective pupil is focused onto the corresponding sensor, yielding an intensity image for each camera and illumination angle (see Supplementary material S2):

I_{i,c}(r) = |T_c{F^{-1}[O(k − k_i − k_c) P_c(k)]}|^2

The additional operator T_c, which is not required in conventional, single-camera FP, describes coordinate transformations and image distortion due to the tilted, off-axis Scheimpflug imaging geometry, which varies from camera to camera. The Scheimpflug configuration [33,34] involves tilting the sample, lens and detector planes with respect to each other, to minimise defocus and distortion effects.
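The forward model can be sketched numerically. The following is a minimal illustrative simulation (not the authors' implementation), ignoring the distortion operator T_c and assuming hypothetical array sizes and a binary circular pupil:

```python
import numpy as np

def mofpm_forward(obj, pupil, ki, kc):
    """Simulate one low-resolution intensity image I_{i,c}(r).

    obj   : complex sample transmission o(r), square array
    pupil : complex pupil P_c(k) on the low-resolution grid
    ki,kc : integer pixel shifts of the spectrum due to LED i and camera c
    """
    n_lo = pupil.shape[0]
    O = np.fft.fftshift(np.fft.fft2(obj))          # sample spectrum O(k)
    cy, cx = np.array(obj.shape) // 2
    # crop the band O(k - k_i - k_c) seen through this camera's aperture
    y0 = cy + ki[0] + kc[0] - n_lo // 2
    x0 = cx + ki[1] + kc[1] - n_lo // 2
    band = O[y0:y0 + n_lo, x0:x0 + n_lo] * pupil   # low-pass filtered band
    field = np.fft.ifft2(np.fft.ifftshift(band))   # low-res complex field
    return np.abs(field) ** 2                      # intensity on the sensor

# toy example: 256x256 phase object, 64x64 sensor, circular NA-limited pupil
rng = np.random.default_rng(0)
obj = np.exp(1j * rng.uniform(0, 0.5, (256, 256)))
ky, kx = np.mgrid[-32:32, -32:32]
pupil = (ky**2 + kx**2 < 20**2).astype(complex)
img = mofpm_forward(obj, pupil, ki=(10, 0), kc=(0, 30))
print(img.shape)  # (64, 64)
```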
Residual distortions vary from camera to camera and are incorporated into the image-construction algorithm. In single-camera FPM, the phase recovery of the constructed high-bandwidth diffraction pattern at the objective pupil plane involves the correct phasing of all diffracted fields recorded for each illumination angle. In MOFPM, this co-phasing requirement extends to the diffracted fields captured by the multiple cameras. With ptychographic reconstruction algorithms, we can aggregate the diffracted fields coherently in the Fourier domain from intensity-only measurements. While phase retrieval is inherently ill-posed [20], a stable solution is possible provided that the multiple diffracted measurements overlap in the Fourier domain. To use existing ptychographic reconstruction algorithms [18,20,21] in MOFPM, the coordinate distortion must be accounted for to remove T_c from the image-formation model. While this could in principle be achieved through careful experimental calibration, we developed a robust and fully automated self-calibration strategy that removes the need for precise multi-camera alignment. We used image-registration algorithms that correct for sensor tilts (which cause perspective distortions) and field-of-view mismatches between the cameras. We also correct for LED-array and aperture/lens displacements of each camera prior to the reconstruction process with an algorithm (described in the Supplementary material S3) based on the Fourier-ptychographic position-misalignment method [35]. The outlined calibration algorithm enables high-quality image reconstruction using low-cost lenses, low-precision and low-stability 3D-printed components, and alignment by hand. Despite the seemingly complicated experimental design, the computational correction of misalignment allows for a relatively simple experimental implementation without the need for high-precision alignment.
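The perspective-correction step can be illustrated with a direct-linear-transform homography fit; this is a generic sketch (not the authors' registration code), with hypothetical corner coordinates of a tilted sensor's view:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (4+ point pairs, DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)          # null vector gives the homography
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N,2) points through the homography in homogeneous coordinates."""
    pts = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]

# corners of a perspective-distorted view, mapped back to a rectilinear grid
src = np.array([[0, 0], [640, 12], [655, 480], [-9, 470]], float)
dst = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], float)
H = fit_homography(src, dst)
```

With H estimated, each raw image would be resampled onto the reference grid before the ptychographic update, which is what removes T_c from the model.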
Following preprocessing, the MOFPM forward model can be simplified (by eliminating T_c) to that derived in Supplementary Material S3:

I_{i,c}(r) = |F^{-1}[O(k − k_i − k_c) P_c(k)]|^2

Apart from variations in P_c(k) between the cameras, the forward model is identical to that used in conventional FPM. Consequently, established FPM reconstruction algorithms can be modified and utilized for construction of a broad image spectrum from multiple low-resolution intensity measurements, as shown in Supplementary Material S4. The off-axis cameras typically capture dark-field images representing high spatial frequencies of the sample, whereas bright-field images are captured by the on-axis camera only. The lack of bright-field conditions within images and the associated lower signal-to-noise ratio degrade reconstruction convergence, especially without a priori knowledge of optical aberrations. As in the computational calibration, the central camera can act as a "guide star" for image construction by the off-axis cameras, ensuring a robust high-SBP image reconstruction without prior knowledge of optical aberrations or distortions. Lastly, we also utilize LED-multiplexed FPM [1,2], where multiple LEDs are illuminated in parallel during capture of a single image, to provide a further improvement in speed of image acquisition. Each captured image-intensity spectrum then contains multiple overlapping frequency bands, which introduces additional challenges for convergence of the computational reconstruction. Nevertheless, LED multiplexing has enabled enhanced frame rates for live-cell imaging using FPM [1]. While both MOFPM and LED-multiplexed FPM aim to improve the speed of image acquisition, MOFPM achieves this through parallelization of the detection NA, while illumination multiplexing parallelises the synthesis of the illumination NA.
Due to mutual orthogonality between the processes, we are able to demonstrate parallelization of both illumination (through multiplexing) and detection (through increased image-space NA) in the same image acquisition. This combined parallelization offers the fastest possible FPM data capture.
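The established FPM reconstruction algorithms referred to above share a common amplitude-replacement core. A minimal illustrative sketch of one such inner update (not the authors' exact implementation; per-camera pupils P_c and multiplexed bands add bookkeeping but not new structure):

```python
import numpy as np

def fpm_band_update(O_hr, pupil, measured_intensity, y0, x0, eps=1e-3):
    """One FPM inner update for a single image: impose the measured amplitude
    on the low-resolution field, then write the corrected band back into the
    high-resolution spectrum O_hr (in place)."""
    n = pupil.shape[0]
    band = O_hr[y0:y0 + n, x0:x0 + n] * pupil          # current filtered band
    field = np.fft.ifft2(np.fft.ifftshift(band))       # estimated low-res field
    # keep the recovered phase, replace the amplitude with the measurement
    field = np.sqrt(measured_intensity) * np.exp(1j * np.angle(field))
    band_new = np.fft.fftshift(np.fft.fft2(field))
    # gradient-style spectrum update, weighted by the conjugate pupil
    O_hr[y0:y0 + n, x0:x0 + n] += (
        np.conj(pupil) * (band_new - band) / (np.abs(pupil) ** 2 + eps)
    )
    return O_hr

# toy usage: update a flat spectrum with a synthetic measurement
O_hr = np.ones((256, 256), dtype=complex)
ky, kx = np.mgrid[-32:32, -32:32]
pupil = (ky ** 2 + kx ** 2 < 20 ** 2).astype(complex)
meas = np.abs(np.fft.ifft2(np.fft.ifftshift(pupil))) ** 2
O_hr = fpm_band_update(O_hr, pupil, meas, y0=96, x0=120)
print(O_hr.shape)  # (256, 256)
```

Iterating this update over all (i, c) bands, with the band offsets (y0, x0) set by k_i + k_c, aggregates the measurements into one broad spectrum.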

Experimental results
In this section, we describe experimental results obtained with our nine-camera MOFPM system. A single MOFPM frame required for ptychographic reconstruction is regarded as a complete data set recorded by the nine cameras for each of the 49 illumination angles, yielding 441 unique diffracted spectral bands. The recorded MOFPM dataset is equivalent to conventional acquisition using a single camera and 441 illumination angles (instead of 49), but is a factor of nine faster. The image quality can match that of conventional FPM only if the image reconstruction is not degraded while fusing the spectra from the nine dissimilar cameras. Thus, we employ single-camera FPM images as a gold-standard reference to evaluate MOFPM. We show that we can reduce image acquisition time from 45 s to 5 s without loss of resolution or reconstruction quality. We also demonstrate a further reduction in acquisition time to 3 s by use of LED-multiplexed MOFPM.
Using the calibration and reconstruction algorithms outlined in the Supplementary Material S3 and S4, the MOFPM reconstruction provides robust convergence in the presence of deviations between the ideal forward model and the experimental implementation. Deviations include chromatic aberration of the microscope objectives, spatially varying illumination intensity and spatially varying aberrations. Moreover, MOFPM does not require knowledge of optical aberrations, instead, aberrations are recovered iteratively together with the complex fields. This is especially useful for calibration of the unique aberrations of all nine cameras. Full-field reconstructions of the Lung Carcinoma sample validate the robustness of our calibration and reconstruction algorithms, which were performed without any a priori aberration knowledge.
Lastly, for imaging live cells, such as the Dictyostelium described below, scattering is weak and exhibits negligible contrast under bright-field illumination. This makes reconstruction of phase-only samples much more challenging than that of cells with good amplitude contrast [1,36]. The lower signal-to-noise ratio of weakly scattering samples also inhibits the use of LED multiplexing, which is known to be sensitive to Poisson noise; given the additional noise of our low-cost sensors, we therefore did not use LED multiplexing when imaging weakly scattering samples.

Enhanced image resolution and space-bandwidth-time product using MOFPM
From reconstructed images of a resolution test target, we can quantitatively demonstrate a nine-fold greater SBTP using MOFPM without sacrificing image quality. In Fig. 3(a) we show the raw image of a USAF test target recorded with a single-camera microscope (the central camera of our nine-camera array illuminated by 7 × 7 LEDs simultaneously), demonstrating a spatial resolution of 8 µm. Conventional FPM, using a single camera recording for 441 illumination angles, yields the reconstruction shown in Fig. 3(b), which exhibits resolution enhancement to 1.1 µm. The resolution estimate is based on the ability to resolve group 9, element 6, and is in agreement with theoretical calculations for an illumination wavelength of 430 nm. The total data-acquisition time is 45 s. We reduced this acquisition time by a factor of nine to 5 s by using nine-fold fewer illumination angles (i.e., 49) together with the nine-camera MOFPM. The reconstructed image is shown in Fig. 3(d) and can be seen to exhibit the same resolution and image quality as the 'gold-standard' single-camera FPM image shown in Fig. 3(b). For reference, we also show in Fig. 3(c) a single-camera image recorded in 5 s using 49 illumination angles, which exhibits the expected intermediate image resolution. LED multiplexing enables the number of captured images to be further reduced from 49 to 29, and a reduction in image acquisition time from 5 s to 3 s, while maintaining image quality and resolution, as can be seen in the image shown in Fig. 3(e).
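The 1.1 µm figure can be checked against the USAF-1951 chart definition, where group g, element e corresponds to 2^(g+(e−1)/6) line pairs per mm:

```python
def usaf_line_pair_um(group, element):
    """Period of one line pair for a USAF-1951 target element, in microns."""
    lp_per_mm = 2 ** (group + (element - 1) / 6)   # resolution in line pairs/mm
    return 1000.0 / lp_per_mm

# group 9, element 6: the smallest feature resolved by the MOFPM reconstruction
period = usaf_line_pair_um(9, 6)
print(f"{period:.2f} um per line pair")   # ~1.10 um
```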

Wide-field histology
In this section, we demonstrate the application of MOFPM to wide-field imaging in histology. Figure 4 shows a reconstructed image of a lung carcinoma sample with a field of view of 5.63 mm × 4.71 mm = 26 mm². Given that the maximum resolving power of our microscope is 1.1 µm, this corresponds to an SBP of 88.5 megapixels, a 40× increase over the 2-megapixel SBP of the raw image shown in Fig. 4(b2,c2,d2). Since SBP is determined only by the imaged FOV and the reconstructed pixel size (determined by the synthetic NA), SBP calculations are independent of the sample scattering strength [12]. Comparison of Fig. 4(c1) and (c3) demonstrates that LED multiplexing can be used to increase acquisition speed without discernible degradation in resolution or image quality. The reduction in acquisition time to 3 s for the 88.5-megapixel image corresponds to an SBTP of ∼30 megapixels per second. This is limited by latency in our cameras; removal of this latency would enable an increase in frame rate from 10 Hz to 30 Hz and an SBTP of ∼90 megapixels per second. Lastly, we also show quantitative phase imaging (QPI) in Fig. 4(c4), obtained from the MOFPM reconstruction at 630 nm illumination. This demonstrates the suitability of MOFPM for quantitative label-free digital pathology applications.
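The SBP figure follows from the field of view and the reconstructed pixel size; a quick check, assuming Nyquist sampling at half the 1.1 µm resolved period:

```python
fov_mm2 = 5.63 * 4.71                 # reconstructed field of view, mm^2
resolution_um = 1.1                   # smallest resolvable period
pixel_um = resolution_um / 2          # Nyquist pixel for that period

# SBP = number of resolvable pixels across the field of view
sbp_megapixels = fov_mm2 * 1e6 / pixel_um**2 / 1e6
print(f"SBP ~ {sbp_megapixels:.0f} megapixels")   # ~88
```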

Longitudinal imaging of cell dynamics
High frame rate is essential for live-cell imaging, especially for ptychographic imaging techniques, which cannot cope with sample motion throughout data acquisition. We show that MOFPM is sufficiently stable, even using 3D-printed components, to enable high-SBTP, wide-field imaging of cell dynamics over several hours. Stability of illumination angles is necessary for accurate positioning of spatial frequencies during image fusion, while sample and sensor stability ensures the spectral content being sampled matches the theoretical model. Furthermore, high-quality imaging is maintained even when the focus varies during image capture, such as due to evaporation of cell-growth media during imaging (we did not use temperature- or humidity-controlled sample stages). Our self-calibration algorithms can correct for movement of all mechanical components, and the reconstruction algorithm provides digital re-focusing through recovery of defocus aberration.

Fig. 3. A USAF target was imaged for quantitative assessment of resolution and image quality using MOFPM. A conventional microscope image (a) and FPM reconstructions using 441 LEDs (b) and 49 LEDs (c) illustrate the trade-off between speed of image acquisition and resolution of the reconstructed image. With MOFPM (d) we achieve a resolution of 1.1 µm, equal to that obtained using 441 LEDs with conventional FPM, but with data capture reduced to only 5 s due to the use of only 49 LEDs and parallel data capture. LED multiplexing enables the image acquisition time to be reduced even further to <3 s while maintaining the same reconstructed image resolution (e).

Fig. 4. … sections (b1,c1,d1). The reconstruction quality is significantly improved compared to raw data with 2-megapixel SBP (b2,c2,d2). We also demonstrate compatibility with LED multiplexing (c3) and the possibility of quantitative phase imaging (c4).
We demonstrate MOFPM for capturing video sequences of the collective motion of large numbers of Dictyostelium cells. These social amoebae are a model organism used to study coordinated cell migration and cell differentiation, which -in response to starvation -aggregate and morph into large migratory slugs several millimetres in length [37]. Investigating such cell evolution is a challenge for conventional microscopy, requiring a range of lenses to be used for small-scale cell-to-cell interactions (∼ 60 − 100× magnification, NA ≳ 0.6) and large-scale cell migration (∼ 4× magnification, NA≲ 0.1) [38][39][40]. With MOFPM, we are able to resolve individual cells 5-15 µm in diameter across a wide field of 26 mm 2 .
Reconstruction of an 84-minute time-lapse image sequence of Dictyostelium cells, extracted from a 10-hour sequence, is shown in Fig. 5. These weakly scattering cells exhibit very low amplitude contrast, and so we show only quantitative phase reconstructions. Raw images in Fig. 5(b1-c1), obtained using illumination from a single LED, exhibit high contrast, which would not be the case with illumination from an extended source (for example, when multiple LEDs are illuminated) [41]. With MOFPM it is possible to resolve individual cells and their formation into migratory slugs, as can be seen in Fig. 5(b2-c2) and the linked video sequence (see figure caption). The extended FoV of 26 mm² enables tracking of the movement of large slugs; however, during cell aggregation the sample thickness can violate the thin-sample approximation assumed in FPM, which can be overcome with multi-slice reconstruction techniques [42,43]. In summary, we were able to successfully demonstrate algorithmic robustness and stability of our calibration algorithms over extended timeframes.

Discussion
In this section, we explain the aspects of MOFPM that offer unique enhancements over other high-speed ptychographic imaging techniques, while also highlighting the synergy with existing high-speed imaging methods that offers a facile route to further increases in SBTP.
The fastest FPM demonstration to date is LED multiplexed FPM [1,2], which was used to collect a complete FPM dataset in 1 second. Image-acquisition time is reduced by simultaneously illuminating the sample with multiple LEDs. Consequently, images corresponding to differing passbands are superimposed at the detector and this is incorporated into the forward model for the FPM reconstruction. While an increased SBTP is achieved, there is a limit on the number of LEDs that can be illuminated in parallel, before image recovery is degraded. We demonstrate below that LED multiplexing can be successfully combined with MOFPM to provide an even greater enhancement in SBTP.
MOFPM and multiplexed FPM both involve redundant sampling of spatial-frequency bands, although for MOFPM there is the added advantage that redundant passbands are encoded in separate images captured by different cameras. This reduces the computational burden because the spatial-frequency decomposition is performed optically by the multiple cameras at the point of detection, rather than computationally during reconstruction. Moreover, in our scalable architecture, there is no fundamental limit to the number of cameras that can be used in parallel. Most importantly, LED multiplexing does not work well with weakly scattering samples, since the consequent mixing of dark-field and bright-field images leads to the weak dark-field signals being overwhelmed by shot noise from the bright-field images [1,20]. MOFPM does not suffer from this limitation, since darkfield and brightfield images are captured by different imaging sensors.
Fortunately, MOFPM and LED multiplexing are mutually complementary, as was demonstrated in Sec. 3. In fact, MOFPM is complementary with all previously reported Fourier-ptychographic implementations, overcoming the limitations imposed by the use of a single sensor and/or lens. The MOFPM image-formation model that we describe should be considered as a generalisation of FPM for increasing both the illumination and detection NA, and one that can be integrated into the design of various optical systems. Further speed improvements are, however, possible through ab initio optimisation of the optical design specifically for multiplexing, as demonstrated by data-driven approaches [44,45]. For example, a non-uniform camera arrangement could be optimised for a given LED-multiplexing pattern to achieve the fastest, non-redundant sampling of the diffracted field under the LED-multiplexing constraints [2].
Lastly, MOFPM also has promise for imaging samples that are optically thick. When the illumination is transmitted through a thin sample, there is a one-to-one relationship between the k-space vectors k_i and the LED positions r_i. This is no longer valid for a thick sample [42,46,47], because refraction within the sample varies with the illumination angle. It has been demonstrated in aperture-scanning Fourier ptychography [46,47] that diffraction can be accurately modelled by keeping the illumination direction constant while scanning the aperture in the Fourier domain. More generally, MOFPM can be regarded as a combination of both illumination- and aperture-scanning FPM. For thin samples, conventional image reconstruction can be used, and for thick samples the multi-camera arrangement can be used to deduce the scattering geometry, similar to tomographic imaging. These novel adaptations are left as future work.

Conclusion
Fourier ptychography has demonstrated quite emphatically how computational image construction can reconfigure the problem of wide-field high-resolution microscopy from the design and manufacture of complex high-cost optics to, instead, the computational integration of multiple band-pass images acquired with simple low-cost optics. Because FPM images can be recorded with a single objective of low SBP, the overall cost and complexity can be massively reduced. The quid pro quo however is that the time-sequential construction of high-SBP images requires long image acquisition times, which reduces the attractiveness of FPM in a wide range of applications: from high-throughput digital pathology to imaging of biological dynamics. One route to increased speed is to use lenses and detectors with higher SBP, but the higher cost of these components reduces the cost-benefit of FPM -which is fundamentally its greatest asset.
Multi-objective FPM offers a new scalable architecture for increased speed of acquisition: the SBP of the detector and objective can be optimally selected to meet system requirements and reduced cost. Zheng et al. introduced the concept of synthesising an increased illumination NA from discrete illumination angles in FPM [12]. Our use of multiple objectives achieves an equivalent increase in NA for imaging, but in parallel. MOFPM can thus be considered as a generalisation of the Fourier ptychography concept to both illumination and imaging domains that provides both enhanced SBP and enhanced SBTP.
We have demonstrated that techniques developed previously for calibration of the LED illumination and a single objective in conventional FPM can be extended to also calibrate the positions and aberrations of a multi-objective array. Our prototype enables construction of a wide-field (89-megapixel), high-resolution (1.1 µm) image, captured in 3 seconds. This is a dramatic improvement compared to raw images containing 2-megapixel SBP and 8 µm resolution. Moreover, the concept can be scaled to almost arbitrarily high SBP and SBTP.

Optical design
In a regular microscope, the object, lens and image planes are all mutually parallel. The use of multiple objectives means this is no longer possible, but by use of the so-called Scheimpflug configuration [33] it is possible to tilt the lens and image planes such that sharp focus is retained across an extended field. The Scheimpflug condition states that the sample, lens and detector planes must meet at a single point called the Scheimpflug intersection, as illustrated in Fig. 6. This imaging technique was designed for aerial photography to remove perspective distortions and also for corneal imaging, because in both cases either the lens or the imaging sensor is tilted with respect to the sample. The Scheimpflug configuration has also been suggested for off-axis FPM imaging due to minimised off-axis aberrations [13,31,34]. While it is possible to correct aberrations (e.g., coma, astigmatism and defocus) computationally, this minimisation of defocus reduces the burden on the reconstruction algorithms. Additional advantages include minimised spatially varying magnification and the ability to use a curved lens array (for multi-objective systems), which increases the maximum attainable resolution compared to a planar lens array. Our experimental Scheimpflug-based MOFPM, shown in Fig. 1, employs nine imaging sensors with a curved lens array. Lenses were located in a 3D-printed holder and detector arrays were mounted in three-axis kinematic stages capable of tip-tilt-axial adjustment. Camera holders were manufactured out of aluminium to cope with the heat generated by the cameras. Based on the diagram in Fig. 6, a set of equations (Eqn. 3) can be derived for the Scheimpflug criterion to yield a given constant magnification between the cameras, where f is the focal length of the objective lenses, M is the magnification of each camera (microscope), and θ_D and θ_c are the tilts of the detector and of the lens, respectively.
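A minimal numerical sketch of this geometry, assuming the standard thin-lens Scheimpflug magnification relation tan θ_D = M tan θ_c with angles measured from the sample plane (our assumption; the exact form of Eqn. 3 follows from Fig. 6), and hypothetical tilt values:

```python
import math

def detector_tilt_deg(theta_c_deg, M):
    """Detector tilt from lens tilt under the Scheimpflug relation
    tan(theta_D) = M * tan(theta_c), keeping magnification M constant."""
    return math.degrees(math.atan(M * math.tan(math.radians(theta_c_deg))))

# hypothetical numbers: a lens tilted 15 degrees at magnification M = 2
print(f"{detector_tilt_deg(15.0, 2.0):.1f} deg")  # ~28.2 deg
```

Because the detector tilt grows faster than the lens tilt, the kinematic stages must provide progressively larger tip-tilt range for the outermost cameras.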
Lens tilt θ_c depends on the position of the lenses r_c, which in turn defines the band sampled by each camera. With this prototype we aimed to demonstrate the speed enhancement of a nine-camera MOFPM compared to a single-camera FPM using 441 LEDs. Since the illumination angles of the LEDs define the total frequency coverage, each camera must cover frequencies equivalent to 441/9 = 49 LEDs, for which we employ an array of 7 × 7 LEDs. Examples of the spectra covered by single and multiple cameras are illustrated in Fig. 6(b). To construct a high-speed MOFPM, we select the desired frequency coverage for a given LED array and use the reciprocal relationship from Fig. 6(c-d) to compute the lens positions r_c (based on the desired LED positions r_i). This defines the separations between the lenses and, in turn, the angles θ_D and θ_c using Eqn. 3. The experimental parameters for our final prototype are summarised in Table 1. The illumination design determines the LEDs to be used, for which the 32 × 32 Adafruit LED array was used. However, this LED array allows the illumination of only a single LED at a time. For LED-multiplexed illumination, which requires simultaneous illumination by multiple LEDs, the Tindie LED array was used. Both LED arrays provide the same light intensity, and they were positioned such that the spatial-frequency overlap (60%) remains the same, providing equivalent illumination conditions in both experiments. The total cost of the microscope components was estimated to be approximately $6000, which is significantly lower than that of the commercial microscope systems used for other high-speed FPM applications [1,2]. Further applications could utilize an array of lower-cost cameras, such as those used for the $150 FPM-based microscope [14].

Image acquisition
Time-sequential acquisition of a complete dataset required for image reconstruction constitutes a single frame containing 51 images captured by each of the nine cameras: 49 images captured under illumination by each of 49 LEDs, a darkframe captured without any LEDs, and one brightfield image used for image registration, enabling calibration for possible microscope drift during imaging. A one-off correction of LED-position misalignment requires ∼ 9 images to be captured for each camera once, prior to longitudinal imaging. Only the 49 brightfield/darkfield images must be captured in quick succession, whereas the remainder can be obtained while the cameras are idle (between longitudinal frame captures). Lastly, all colour images were obtained by capturing separate frames under illumination by red, green or blue LEDs; stacking these monochrome reconstructions yields a single RGB colour image. When all nine cameras are used in parallel, the frame rate is reduced from 38 FPS (for an isolated camera) to 10 FPS. For our experiments, we connected 2 − 3 cameras per USB PCIe card on a standard tabletop computer and used Python scripts for image acquisition. All images were captured at the maximum available frame rate of 10 FPS, but upgrading the USB PCIe cards and using native image-acquisition code would enable an almost four-fold increase in image-acquisition speed.
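The per-frame time budget follows directly from these numbers. A hedged back-of-envelope sketch (the single-objective comparison at the camera's full 38 FPS is our own framing, not a measured figure):

```python
def frame_time(n_images, fps):
    """Seconds needed to capture one dataset of n_images at a given frame rate."""
    return n_images / fps

# MOFPM: each of the nine cameras captures its 51 images in parallel
# at the bandwidth-limited 10 FPS.
mofpm_seconds = frame_time(51, 10)      # 5.1 s per complete frame

# A single-objective FPM covering the same spectrum must capture all
# 441 LED images with one camera; even at that camera's full 38 FPS
# this takes longer than the parallel nine-camera acquisition.
single_seconds = frame_time(441, 38)    # ~11.6 s

# Nine cameras sample nine times more of the diffracted field per
# exposure, so sequential captures per camera drop nine-fold.
assert 441 // 9 == 49
```

In practice the advantage grows further once the USB bandwidth bottleneck is removed, since the parallel cameras could then approach their native 38 FPS.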

LED multiplexed image acquisition
In MOFPM, each camera can be considered a standalone conventional FPM microscope with tilted optical components. Given this equivalence between MOFPM and FPM, the LED array can be multiplexed using the same principles and constraints as conventional FPM: captured diffracted fields should not overlap in the spectrum, and LED illumination from within the objective NA (resulting in brightfield images) should not be mixed with LED illumination from outside the NA (resulting in darkfield images) [1,2]. In our system only the central camera can capture brightfield images, whereas the off-axis cameras capture only darkfield images. In total, 49 LEDs are used for data acquisition, 9 of which result in brightfield images on the central camera. Since all 9 brightfield LEDs overlap in the spectral domain, they could not be multiplexed. Hence, the brightfield images were captured in time sequence, while the remaining 40 darkfield images were captured using either 2-LED or 4-LED multiplexing. The total number of captured images was reduced by roughly half or more: from 49 to 29 (2 LEDs in parallel) or to 19 (4 LEDs in parallel), reducing capture times from 5 s to 3 s and 2 s, respectively. However, the reconstruction-quality requirements were satisfied only for 2-LED multiplexing, which was therefore used for data reconstruction.
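The exposure counts above follow from a simple rule: the 9 spectrally overlapping brightfield LEDs are always captured one at a time, while the 40 darkfield LEDs are grouped. A minimal sketch of this accounting:

```python
import math

def capture_counts(n_bright=9, n_dark=40, leds_per_exposure=1):
    """Exposures per frame: brightfield LEDs captured one at a time
    (their spectra overlap), darkfield LEDs captured in groups."""
    return n_bright + math.ceil(n_dark / leds_per_exposure)

fps = 10  # bandwidth-limited frame rate with all nine cameras running

for k in (1, 2, 4):
    n = capture_counts(leds_per_exposure=k)
    print(f"{k}-LED multiplexing: {n} images, {n / fps:.1f} s per frame")
# 1-LED: 49 images, 4.9 s; 2-LED: 29 images, 2.9 s; 4-LED: 19 images, 1.9 s
```

This is purely the capture-count arithmetic; which darkfield LEDs may share an exposure is additionally constrained by the non-overlapping-spectra requirement described above.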

Image reconstruction
All images were reconstructed using the quasi-Newton engine [20], whose convergence was accelerated with adaptive momentum (ADAM) [48]. We first obtained a low-resolution reconstruction of the sample and the pupil from the central camera, which was then used as the initial estimate for the MOFPM reconstructions. The central-camera reconstruction required up to 250 iterations to recover the optical aberrations; afterwards, up to 100 iterations per camera were required to reach convergence. Reconstructions were performed by splitting the 2448 × 2048 pixel FOV into 80 segments (256 × 256 pixels each) to mitigate the issue of spatially varying aberrations. Reconstruction of a single FOV segment took 5 minutes on an NVIDIA GeForce 1080 Ti GPU, producing a 2048 × 2048 pixel image. Since our microscope was finite-conjugate, the non-telecentric geometry produced a phase curvature in the sample plane; to avoid artefacts in the reconstruction, we used a phase-curvature correction method [49]. Since the reconstructed image segments in MOFPM are visually identical to those reconstructed using conventional FPM, the same stitching methods can be used to produce a single wide-field image. All image segments were blended together in ImageJ [50] to produce full-FOV images without visible discontinuities.
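One way the 80-segment split can be realised is as a 10 × 8 grid of 256 × 256 tiles with slight overlap between neighbours. The sketch below assumes such a near-uniform tiling (the paper does not specify the exact tile layout or overlap):

```python
import numpy as np

def tile_origins(extent, tile, n_tiles):
    """Evenly spaced tile origins so that n_tiles tiles of size `tile`
    cover [0, extent], with slight overlap between neighbouring tiles."""
    return np.linspace(0, extent - tile, n_tiles).astype(int)

width, height, tile = 2448, 2048, 256
xs = tile_origins(width, tile, 10)   # 10 columns of tiles
ys = tile_origins(height, tile, 8)   # 8 rows of tiles

# Each segment is (x0, y0, x1, y1) in raw-image pixel coordinates.
segments = [(x, y, x + tile, y + tile) for y in ys for x in xs]
assert len(segments) == 80                  # 80 segments per camera FOV
assert segments[-1][2:] == (width, height)  # tiling reaches the far corner
```

Within each small segment the aberrations are approximately spatially invariant, so a single pupil estimate per segment suffices; the overlap regions are what the subsequent ImageJ blending uses to avoid visible seams.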