Towards Low-Photon Nanoscale Imaging: Holographic Phase Retrieval via Maximum Likelihood Optimization

A new algorithmic framework is presented for holographic phase retrieval via maximum likelihood optimization, which allows for practical and robust image reconstruction. This framework is especially well-suited for holographic coherent diffraction imaging in the \textit{low-photon regime}, where data is highly corrupted by Poisson shot noise. This methodology thus offers a viable path toward \textit{low-photon nanoscale imaging}, a fundamental challenge facing current imaging technologies. Practical optimization algorithms are derived and implemented, and extensive numerical simulations demonstrate significantly improved image reconstruction versus the leading algorithms currently in use. Further experiments compare the performance of popular holographic reference geometries in order to determine the optimal combined physical setup and algorithm pipeline for practical implementation. Additional features of these methods are also demonstrated, which allow for fewer experimental constraints.


Holographic CDI and phase retrieval
Coherent Diffraction Imaging, or CDI, is a scientific imaging technique used for resolving nanoscale specimens, such as viruses, proteins, and crystals [1]. In CDI, a coherent radiation source, typically an X-ray beam, is incident upon a specimen, whereupon diffraction occurs. The resulting diffracted wave is then incident upon a far-field detector which measures the photon flux. This photon flux is approximately proportional to the squared magnitude values of the Fourier transform of the electric field within the diffraction area. Given this data, the specimen's structure (e.g., its electron density) can then, in principle, be determined by solving the mathematical inverse problem of recovering a signal from squared magnitude measurements of its (oversampled) Fourier transform, which is known as the phase retrieval problem. Phase retrieval is a highly challenging inverse problem, which in general does not admit unique or closed-form solutions and can at best be approximately solved via iterative algorithms [2,3].
To improve this situation, a popular variant of CDI known as holographic coherent diffraction imaging (HCDI) has been developed, in which additional information is introduced and can be leveraged towards better solving the phase retrieval problem. Specifically, in HCDI a known "reference" object is placed adjacent to the imaging specimen. A schematic of such a setup is shown in Fig. 1. With this additional known information (e.g. the electron density of the reference), the resulting inverse problem, known as the holographic phase retrieval problem, can be written abstractly as:

  find X such that |F(X + R)|^2 = Y,    (1.1)

where R is the known reference object, F is a Fourier transform operator, |·| denotes the pointwise absolute value, and Y is the set of measured values corresponding to the diffraction measurements recorded by a detector array. Remarkably, the additional knowledge of R greatly simplifies this inverse problem into a problem which has a unique solution that can be expressed via the solution of a system of linear equations [4]. This solution is an example of a classical image processing technique known as deconvolution. In practice, however, although simpler than for non-holographic CDI, imaging via HCDI is a more complicated problem than that of the idealized holographic phase retrieval problem of Eq. (1.1). In particular, this is due to the fact that data measurements are corrupted by Poisson shot noise, and low-frequency data is often occluded by a beamstop apparatus. More information about these practical considerations and how they can be mathematically modeled is given in Section 2.

Figure 1: Holographic CDI schematic. The upper portion of the diffraction area contains the imaging specimen of interest, and the lower portion consists of a known reference shape. Image courtesy of [5].

Prior art
As briefly discussed in Section 1.1, the knowledge of the reference values R allows for X to be solved for via a (linear) deconvolution problem, which provides exact reconstruction in the noiseless setting. We briefly sketch this deconvolution procedure and the leading algorithms to date for its computation. Consider a specimen X ∈ C^{n×n} and reference R ∈ C^{n×n} which are separated by an n × n zero block to altogether form the hybrid object S ∈ C^{n×(3n)} given by:

  S = [ X  0  R ],    (1.2)

and let Y = |F(S)|^2 denote a noiseless version of the measured HCDI data. By the well-known Wiener-Khinchine theorem, A = F^{-1}(Y) is equal to the autocorrelation of S. Moreover, this autocorrelation contains a submatrix which is equal to the cross-correlation of X and R [4], which we shall denote by Z = X ⋆ R. Given noisy measured data, the corresponding submatrix Z can be thought of as a noise-corrupted version of this cross-correlation. An estimate X̂ for the specimen image can thus be given by a deconvolution formula, known as inverse filtering, given by:

  X̂ = F_OS^{-1}( F(Z) ⊘ conj(F_OS(R)) ),    (1.3)

where F_OS is an oversampled Fourier transform operator such that F(Z) and F_OS(R) are of the same size, ⊘ denotes pointwise division, and conj(·) denotes complex conjugation in the pointwise sense.
Without the presence of noise, X̂ gives an exact reconstruction of X, i.e. X̂ = X. Note as well that the inverse filtering formula of Eq. (1.3) requires the full zero separation given in Eq. (1.2). More recently proposed deconvolution algorithms, notably HERALDO [6] and Referenced Deconvolution [4], do not require this full separation. However, this comes at the expense of more complicated reconstruction formulas, which are not computationally efficient for complicated reference objects R such as the uniformly redundant array (URA) reference [7] (which consists of a highly structured binary pattern).
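The inverse filtering recovery of Eq. (1.3) can be sketched in a few lines of NumPy. The following is a minimal illustration on synthetic noiseless data, not the authors' implementation: the cross-correlation Z is synthesized directly in the Fourier domain (matching the conjugation convention of Eq. (1.3)), and the small constant `eps` is an assumption added here purely for numerical safety.

```python
import numpy as np

def inverse_filtering(Z, R, eps=1e-12):
    """Recover X from the cross-correlation Z = X (star) R by pointwise
    Fourier division, as in Eq. (1.3); exact in the noiseless setting."""
    FZ = np.fft.fft2(Z)
    FR = np.fft.fft2(R, s=Z.shape)          # oversampled transform of R
    return np.fft.ifft2(FZ / (np.conj(FR) + eps))

# toy demonstration on synthetic noiseless data
rng = np.random.default_rng(0)
n = 32
X = rng.random((n, n))
R = rng.random((n, n))                      # a generic reference
m = 2 * n                                   # two-times oversampled grid
# cross-correlation synthesized in the Fourier domain
Z = np.fft.ifft2(np.fft.fft2(X, s=(m, m)) * np.conj(np.fft.fft2(R, s=(m, m))))
X_rec = inverse_filtering(Z, R)
print(np.allclose(X_rec[:n, :n].real, X, atol=1e-8))  # exact recovery
```

In the noiseless case the recovery is exact up to floating-point error, illustrating the claim X̂ = X above.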
A variant on the inverse filtering formula of Eq. (1.3), which also requires the full specimen-reference separation but incorporates a denoising method, is known as Wiener filtering [8], and is given by:

  X̂ = F_OS^{-1}( F(Z) ⊙ F_OS(R) ⊘ ( |F_OS(R)|^2 + C ) ),    (1.4)

where ⊙ denotes pointwise multiplication and C is a constant term which ideally is equal to the reciprocal of the problem's signal-to-noise ratio. (In practice for HCDI, C can be estimated using a logarithmic search and by comparing the quality of the resulting X̂.) Wiener filtering is a classical denoising algorithm which is derived for an additive noise model, such as for Gaussian noise. As discussed in Section 2.1, the Poisson shot noise can be approximated as Gaussian noise when the photon flux N_p is large. Thus, this method has proved to be a popular algorithm for holographic CDI [8,9]. However, in the low-photon regime this assumption breaks down, and the Wiener filtering method thus performs poorly.
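A corresponding NumPy sketch of Wiener filtering (Eq. (1.4)), including a logarithmic search for C, is given below. This is an illustrative toy, not the paper's code: additive Gaussian noise stands in for the HCDI noise model, and C is selected here against the ground truth rather than the data-based criterion of Eq. (4.1).

```python
import numpy as np

def wiener_filtering(Z, R, C):
    """Wiener deconvolution as in Eq. (1.4): regularized inverse filtering,
    with C ideally equal to the reciprocal of the signal-to-noise ratio."""
    FZ = np.fft.fft2(Z)
    FR = np.fft.fft2(R, s=Z.shape)
    return np.fft.ifft2(FZ * FR / (np.abs(FR) ** 2 + C))

# toy demonstration: noisy cross-correlation, logarithmic search for C
rng = np.random.default_rng(1)
n, m = 32, 64
X, R = rng.random((n, n)), rng.random((n, n))
Z = np.fft.ifft2(np.fft.fft2(X, s=(m, m)) * np.conj(np.fft.fft2(R, s=(m, m))))
Z_noisy = Z + 0.2 * rng.standard_normal((m, m))      # additive noise stand-in
errs = [(np.linalg.norm(wiener_filtering(Z_noisy, R, C)[:n, :n].real - X), C)
        for C in 10.0 ** np.arange(-10, 11)]
best_err, best_C = min(errs)
err_inv = np.linalg.norm(wiener_filtering(Z_noisy, R, 0.0)[:n, :n].real - X)
```

Setting C = 0 recovers plain inverse filtering, so the best Wiener output over the search grid does no worse, and typically much better, than unregularized deconvolution.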
Another major drawback of these deconvolution methods is that they cannot directly account for missing data due to beamstop occlusion. In practice, the missing data thus must be estimated before these methods can be applied [8,7]. (This is often done in practice by interpolating with a Gaussian or error function.) A recent deterministic algorithm [10,11] is able to recover images directly with missing beamstop data via the construction and solution of a system of equations. This method, however, cannot be directly modified to make use of the denoising approach of Wiener filtering, and unlike direct deconvolution can give rise to systems of equations which are numerically unstable [4,12]. This approach thus lacks a systematic framework, and does not produce high-quality results (especially given data in the low-photon regime).
More recent papers [13,14,15] have considered an approach to classical (i.e. non-holographic) phase retrieval involving maximum likelihood estimation (MLE). Our experiments demonstrate that these approaches alone are insufficient for quality image reconstruction given low-photon CDI data, and that the additional usage of a holographic reference object is crucial. Moreover, these recent works have focused on the theoretical and algorithmic aspects of the problem, each centering on a particular optimization algorithm, namely ADMM [13,15] or truncated Wirtinger flow (TWF) [14]. Only [15] considers an instance of beamstop occlusion.
Other works on deep learning methods for low-photon imaging have recently been published, notably [16,17]. A recent preprint considers a hybrid Poisson-Gaussian noise model in an idealized problem setting [18].

Our contributions
In this work, we propose and study the use of maximum likelihood estimation for holographic CDI. This method provides several practical advantages over classical algorithms. Specifically, it can accommodate low-photon data that is highly noise-corrupted, as well as missing data that is occluded by a beamstop. It also does not require the classical physical constraints of having at least two-times oversampled data, a zero separation between the specimen and the reference, and a rectangular geometry for the reference. In contrast to current HCDI methods, which each require a specific reconstruction algorithm, the MLE optimization framework can be robustly optimized via a variety of standard numerical optimization methods. In this work, we also provide extensive and thorough testing of various practical CDI problem settings, such as the effects of variable photon flux values, the presence of a beamstop apparatus, and the usage of various popular reference objects.

Practical considerations and mathematical modeling
We summarize several practical HCDI considerations, and their mathematical modeling, which in practice further complicate the image reconstruction problem beyond that of the idealized holographic phase retrieval problem.

Poisson shot noise and the low-photon regime
The well-known Poisson distribution is a discrete probability distribution, which depends on a parameter λ > 0. A discrete random variable with this distribution, denoted by Z ∼ Pois(λ), has a probability mass function given by:

  P(Z = k) = λ^k e^{−λ} / k!,  k = 0, 1, 2, . . .    (2.1)

As a consequence of the quantum dynamics of photon emission in a radiation source, the photon flux measured at a CDI detector follows a Poisson shot noise model. Specifically, let Ȳ denote the average value of Y (averaged over the number of detector pixels), and let N_p be the average photon flux per pixel. The data measured at the detector (i.e. the number of photons recorded), at each pixel location (i, j) ∈ M, is modeled as [4]:

  Ŷ_ij ∼ Pois( (N_p / Ȳ) Y_ij ).    (2.2)

For many biophysical applications of HCDI, such as imaging of proteins, cells, and tissues, the incoming photon flux N_p must be limited so as to not cause damage to the specimen from overexposure to radiation [19,20]. In this setting, HCDI must operate in the low-photon regime, e.g. given data measurements for which N_p < 10 [21]. Another setting in which low-photon HCDI arises is when X-ray energy resources are limited, or sought to be minimized [21]. Since the level of noise corruption given by the Poisson shot noise model is inversely proportional to N_p, this amounts to HCDI imaging given highly noisy measurements.
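The shot noise model of Eq. (2.2) is straightforward to simulate. The following is a minimal sketch on toy-sized data, not the paper's simulation code; the hybrid-object layout and sizes are illustrative assumptions.

```python
import numpy as np

def shot_noise(Y, N_p, rng=None):
    """Simulate detector counts under the Poisson shot noise model of Eq. (2.2):
    each pixel is drawn as Pois(N_p * Y_ij / Ybar), with Ybar the mean of Y."""
    rng = rng or np.random.default_rng()
    Ybar = Y.mean()
    return rng.poisson(N_p * Y / Ybar)

# noiseless squared Fourier magnitudes of a toy hybrid object
rng = np.random.default_rng(0)
S = rng.random((32, 96))                       # specimen | zero block | reference
Y = np.abs(np.fft.fft2(S, s=(64, 192))) ** 2   # two-times oversampling
for N_p in (1000, 10, 0.1):
    Yhat = shot_noise(Y, N_p, rng)
    print(N_p, Yhat.mean())  # average counts per pixel is approximately N_p
```

By construction the expected count per pixel averages to N_p, so lowering N_p directly raises the relative noise level, which is the low-photon regime described above.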
In this noisy setting, the classical denoising methods for deconvolution (e.g. see Section 1.2) break down, since they rely on the assumption that shot noise can be well approximated as Gaussian noise. This assumption is based on the well-known behavior that the Poisson distribution approaches a normal (i.e. Gaussian) distribution as the value of the parameter λ increases, an approximation which is no longer valid as N_p decreases.

Beamstop occlusion
In HCDI experiments, the central portion of diffracted radiation is often blocked from reaching the detector array by a beamstop apparatus (see Fig. 2). This is because the low-frequency content (i.e. the Fourier transform magnitudes) of the measured data is typically much larger in magnitude than that of the higher frequencies [4] (e.g. see Fig. 3). Thus, the low-frequency data must typically be excluded so that the range of measured values does not exceed the dynamic range of the detector sensors [22]. In this case, the data acquired is more realistically modelled as:

  Ŷ = B ⊙ Pois( (N_p / Ȳ) |F(X + R)|^2 ),    (2.3)

where |·| denotes the pointwise absolute value, ⊙ denotes the Hadamard product (i.e. pointwise multiplication), Pois(·) is applied pointwise, and the mask B ∈ R^{m1×m2} is of the form

  B_ij = { 0,  if |i| < ω_1 and |j| < ω_2 ;  1,  otherwise }

for some cutoff frequencies ω_1 and ω_2.

Figure 3: Typical acquired HCDI data (the squared Fourier magnitude data corresponding to the setup shown in Fig. 5). Low-frequency content is the largest by several orders of magnitude.
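The beamstop mask B can be sketched as follows, assuming (as an illustrative convention) that frequencies are indexed from the DC component in the standard FFT ordering.

```python
import numpy as np

def beamstop_mask(m1, m2, w1, w2):
    """Binary mask B: zero on the centered low-frequency block
    |i| < w1, |j| < w2 (indexed from the DC component), one elsewhere."""
    i = np.fft.fftfreq(m1, d=1.0 / m1)     # signed frequency indices
    j = np.fft.fftfreq(m2, d=1.0 / m2)
    I, J = np.meshgrid(i, j, indexing="ij")
    return np.where((np.abs(I) < w1) & (np.abs(J) < w2), 0.0, 1.0)

B = beamstop_mask(64, 192, 13, 13)         # occludes a 25 x 25 low-frequency region
print(int(B.size - B.sum()))               # 625 occluded pixels
```

Multiplying simulated data by B then reproduces the occlusion model above.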

Reference design
In the ideal (i.e. noiseless) setting, any known reference object satisfying mild constraints gives rise to exact image recovery via the solution of a linear deconvolution problem [4]. Practically speaking, however, given noisy measurements the choice of reference objects significantly impacts the quality of image reconstruction. The first reference object to be implemented for holographic imaging was the pinhole reference [23]. The pinhole reference (shown in Fig. 4), which is mathematically represented by a delta function, gives rise to the simplest system of equations for image reconstruction. Due to this simplicity as well as its historical familiarity, the pinhole reference has remained a popular reference choice in holographic imaging, including for HCDI.
Recent research analyzing the behavior of various reference geometries [4] has shown that amongst simple reference geometries and given mid-to-high photon data, a block reference (i.e. a square-shaped region of empty space adjacent to the imaging specimen, as shown in Fig. 4) produces the best image reconstruction quality.
Further improved image reconstruction (in the mid-to-high photon regime) has been shown to be achievable by a reference known as a uniformly redundant array (URA) [7], which consists of a highly structured binary pattern, as shown in Fig. 4. The fabrication and implementation of this reference, however, are more challenging and expensive than for a simple reference geometry.
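For concreteness, the two simple reference geometries discussed above can be represented as arrays as follows; this is a schematic illustration only, and the URA construction of [7] is not reproduced here.

```python
import numpy as np

def pinhole_reference(n):
    """Pinhole reference: a single transmissive pixel, i.e. a discrete delta."""
    R = np.zeros((n, n))
    R[n // 2, n // 2] = 1.0
    return R

def block_reference(n):
    """Block reference: a uniformly transmissive square region."""
    return np.ones((n, n))

# The URA reference of [7] is a structured binary pattern from the
# coded-aperture literature; its construction is omitted in this sketch.
R_pin, R_blk = pinhole_reference(256), block_reference(256)
print(R_pin.sum(), R_blk.sum())   # 1.0 and 65536.0
```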

Physical constraints
The established, deconvolution-based algorithms for holographic phase retrieval require two physical constraints for the experimental setup and acquired data. Firstly, the acquired Fourier transform magnitude data must be oversampled by at least two-times in both the x- and y-directions. (More precisely, for an object of length n in the x- or y-direction, there must be at least m ≥ 2n − 1 collected frequency samples in the same direction.) Secondly, a minimum separation condition must be satisfied between the specimen and reference object. This condition, known as the classical holographic separation condition, states that for a specimen and reference each of size n × n pixels, there must be a zero-region of at least n × n pixels separating them. Physically, this zero region is realized as a portion of space where the transmitted electric field is equal to zero, i.e. where no radiation can be diffracted. An example specimen-reference setup satisfying this condition is shown in Fig. 5. (Note that this condition can be relaxed for deconvolution in the noiseless setting (e.g. see [4,6]), but is required to simplify the mathematical expression for recovering X and to incorporate a denoising method, e.g. the Wiener filtering method discussed in Section 1.2.) Furthermore, these algorithms are easily implemented only for reference objects that lie within a rectangular region that is physically separated from the imaging specimen. When this condition is not met, a novel deconvolution scheme is needed, which is typically unwieldy and does not allow for efficient computation. An example of such an irregular setup is given in [10], which considers an annulus-shaped reference. This setup is introduced for a specific application and results in a highly complicated and inefficient deconvolution algorithm. Thus, the condition of a rectangular and separated reference object can be effectively considered as an additional constraint for deconvolution-based algorithms.

The HoloML objective function
To account for the practical considerations given in Section 2, from Eq. (2.3) we consider the measured data Ŷ as being of the form:

  Ŷ = B ⊙ Pois( (N_p / Ȳ) |F(X + R)|^2 ).    (3.1)

Taking the approach of maximum likelihood estimation, we seek to determine the image X which maximizes the probability of obtaining the measured data Ŷ. Let M̃ denote the subset of the data points M that are not zeroed out by the beamstop B. Given the Poisson probability distribution of Eq. (2.1) and the Poisson shot noise model of Eq. (2.2), and using the standard assumption that measured pixel values are independent (e.g. see [24]), it follows that the probability of obtaining the set of measured data Ŷ as a function of X is given by:

  g(X) = Π_{(i,j) ∈ M̃} exp(−λ_ij) λ_ij^{Ŷ_ij} / (Ŷ_ij)!,  where λ_ij = (N_p / Ȳ) |F(X + R)_ij|^2.    (3.2)

Since the function log(·) is monotonically increasing, the global maximizer of g(X) is equal to the global minimizer of the corresponding negative log-likelihood function −log(g(X)). The value of Ȳ, while strictly speaking a function of X, is assumed to be constant, since it does not vary significantly around the global minimizer (see [4]). Thus, after factoring out the constant terms, we arrive at the following objective function:

  L(X) = Σ_{(i,j) ∈ M̃} [ λ_ij − Ŷ_ij log(λ_ij) ].

To avoid dealing with the possibly irregular geometry of the set of non-occluded points M̃, we can reformulate this equivalently as:

  L(X) = Σ_{(i,j) ∈ M} [ B_ij λ_ij − Ŷ_ij log(λ_ij) ].    (3.3)

(Note that the product with B_ij is omitted in the rightmost terms here, since it would lead to taking logarithms of zero, which are undefined. It is valid to omit B_ij here since for these terms Ŷ_ij = 0.) We shall term this the HoloML objective function, and seek the phase retrieval solution which is its global minimizer. Note that, in contrast to the currently used algorithms discussed in Section 1.2, we have made no assumptions on R and do not require a minimum oversampling ratio for Ŷ.
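The objective of Eq. (3.3) translates directly into code. Below is a minimal NumPy sketch, not the paper's implementation: it assumes the hybrid-object layout of Fig. 5 (specimen, zero block, and reference side by side) and treats Ȳ and N_p as known constants.

```python
import numpy as np

def holo_ml_objective(X, R, Yhat, B, N_p, Ybar, m):
    """HoloML objective, Eq. (3.3), up to additive constants in X.
    X, R: n x n specimen estimate and known reference; Yhat: measured counts;
    B: beamstop mask; m: oversampled data shape."""
    S = np.concatenate([X, np.zeros_like(X), R], axis=1)   # hybrid object [X | 0 | R]
    lam = (N_p / Ybar) * np.abs(np.fft.fft2(S, s=m)) ** 2
    nz = Yhat > 0                    # occluded/empty pixels contribute no log term
    return (B * lam).sum() - (Yhat[nz] * np.log(lam[nz])).sum()

# sanity check on toy data: the true image should score better than a random one
rng = np.random.default_rng(0)
n, m = 16, (32, 96)
X, R = rng.random((n, n)), rng.random((n, n))
S = np.concatenate([X, np.zeros_like(X), R], axis=1)
Y = np.abs(np.fft.fft2(S, s=m)) ** 2
Ybar, N_p, B = Y.mean(), 100.0, np.ones(m)
Yhat = B * rng.poisson(N_p * Y / Ybar)
f_true = holo_ml_objective(X, R, Yhat, B, N_p, Ybar, m)
f_rand = holo_ml_objective(rng.random((n, n)), R, Yhat, B, N_p, Ybar, m)
print(f_true < f_rand)
```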

Optimization methods
For a general, complex-valued X, the optimization problem of Eq. (3.3) can be approached via the Wirtinger derivative, which is a popular generalization of the real-valued derivative to complex functions [14]. Writing λ = (N_p / Ȳ) |F(X + R)|^2, the Wirtinger gradient with respect to X is given by:

  ∇L(X) = (N_p / Ȳ) F†[ (B − Ŷ ⊘ λ) ⊙ F(X + R) ],    (3.4)

where Ŷ denotes the measured data, F† denotes the adjoint of F, and ⊘ denotes pointwise division. In the case where X is real-valued (as is often assumed in HCDI), the (real-valued) gradient of Eq. (3.3) is given by:

  ∇L(X) = 2 Re{ (N_p / Ȳ) F†[ (B − Ŷ ⊘ λ) ⊙ F(X + R) ] }.    (3.5)

Given these expressions for the gradient, the HoloML objective function can then be optimized via a variety of numerical solvers. In Section 4, optimization is performed via both the conjugate gradient and trust-region methods, which are representative examples of first- and second-order methods, respectively. The conjugate gradient method is a popular first-order method for unconstrained nonlinear optimization problems, and often converges more rapidly than the standard gradient descent (i.e. steepest descent) method. The trust-region algorithm, a second-order method, often provides faster convergence than first-order methods. (For the trust-region method, an approximate Hessian is typically constructed by numerical solver packages given the analytic expressions for the function and its gradient.) It is observed that both of these methods produce almost entirely identical solutions, both of which are of high quality. This behavior is quite remarkable in light of the fact that Eq. (3.3) is a nonconvex objective function, and thus different methods could conceivably produce entirely different solutions. (For example, for the classical, i.e. non-holographic, phase retrieval problem, which is also nonconvex, alternating projection type methods may produce high-quality solutions while first- and second-order methods typically fail [25].) The observed behavior that different methods produce similar high-quality solutions is indeed a surprising and attractive feature of the MLE method.
In turn, this allows for flexible implementations that can be tailored to best suit particular HCDI applications.
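A self-contained sketch of first-order optimization of the HoloML objective is given below. The gradient follows the Wirtinger calculation described above for real-valued X, and plain backtracking gradient descent stands in for the Manopt conjugate-gradient and trust-region solvers used in the paper; the layout, sizes, and step-size constants are illustrative assumptions.

```python
import numpy as np

def objective_and_grad(X, R, Yhat, B, N_p, Ybar, m):
    """HoloML objective (Eq. (3.3)) and its gradient for real-valued X.
    Assumes the hybrid layout S = [X | 0 | R] and an oversampled FFT of shape m."""
    n = X.shape[0]
    S = np.concatenate([X, np.zeros_like(X), R], axis=1)
    W = np.fft.fft2(S, s=m)
    lam = (N_p / Ybar) * np.abs(W) ** 2
    nz = Yhat > 0                      # log terms only where counts were observed
    f = (B * lam).sum() - (Yhat[nz] * np.log(lam[nz])).sum()
    # dL/d(conj W) = ((N_p/Ybar) B - Yhat/|W|^2) W, pulled back by the adjoint FFT
    G = ((N_p / Ybar) * B - Yhat / np.abs(W) ** 2) * W
    gS = 2.0 * np.real(np.fft.ifft2(G)) * (m[0] * m[1])
    return f, gS[:n, :n]               # only the specimen block of S is free

def gradient_descent(X0, args, iters=200):
    """Backtracking gradient descent: a simple first-order stand-in for the
    conjugate-gradient and trust-region solvers used in the paper."""
    X, t = X0.copy(), 1e-6
    f, g = objective_and_grad(X, *args)
    for _ in range(iters):
        for _ in range(60):            # backtracking line search
            f_new, g_new = objective_and_grad(X - t * g, *args)
            if f_new < f:
                break
            t /= 2.0
        else:
            return X, f                # no further descent found
        X, f, g, t = X - t * g, f_new, g_new, 2.0 * t
    return X, f

# toy reconstruction from shot-noise-corrupted data
rng = np.random.default_rng(0)
n, m = 16, (32, 96)
X_true, R = rng.random((n, n)), rng.random((n, n))
S = np.concatenate([X_true, np.zeros_like(X_true), R], axis=1)
Y = np.abs(np.fft.fft2(S, s=m)) ** 2
Ybar, N_p, B = Y.mean(), 1000.0, np.ones(m)
Yhat = rng.poisson(N_p * Y / Ybar).astype(float)
args = (R, Yhat, B, N_p, Ybar, m)
X0 = rng.random((n, n))
X_rec, f_rec = gradient_descent(X0, args)
print(np.linalg.norm(X_rec - X_true) / np.linalg.norm(X_true))  # relative error
```

The analytic gradient can be checked against finite differences, which is a useful safeguard before substituting a more sophisticated solver for the simple descent loop above.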

Numerical experiments
Numerical experiments were conducted comparing the results of optimization algorithms applied to minimizing the HoloML objective function versus the current leading holographic phase retrieval algorithms. These experiments were conducted using 256 × 256 pixel test images of biophysical specimens: the mimivirus [26], embryo [27], oocytes [27], S. pistillata [28], salmonella [29], and sifA protein [29]. For the experiments in the following three subsections, the setup used is shown in Fig. 5, where the test image, zero region, and reference object (the URA reference) are each of size 256 × 256 pixels, altogether forming a hybrid object of size 256 × 768 pixels. The zero separation between specimen and reference is included here to allow for comparison with the classical algorithms which require it, as discussed in Section 1.2.
Two-times oversampling was implemented when generating the corresponding Fourier transform magnitudes, which thus produced a data array of size 512 × 1536. This data was then subject to Poisson shot noise, whose average photon flux value is subsequently denoted as N p .
Two algorithms were applied towards optimizing the HoloML objective function, using the Manopt optimization package for Matlab. The first such method is the conjugate gradient algorithm (a standard first-order optimization algorithm), which is implemented given the HoloML objective function of Eq. (3.3) and its corresponding gradient given by Eq. (3.4); we term this algorithm HoloML-CG. The second method is an implementation of the trust-region method discussed in Section 3.2, and is termed HoloML-TR. For experiments using both HoloML-CG and HoloML-TR, 50 iterations were run per trial. The computation times per iteration for HoloML-CG and HoloML-TR in subsequent experiments were approximately 0.1 seconds and 0.2 seconds, respectively, with experiments being run on a Windows XPS 13 9380 with an Intel(R) Core(TM) i7-8665U processor. We compare these new methods to the leading holographic phase retrieval algorithms in use to date, namely the inverse filtering [4] and Wiener filtering [8] methods. For experiments involving a beamstop, the missing data was replaced by an approximating Gaussian function before the inverse filtering and Wiener filtering methods were applied, as per a well-known technique [8,7] which improves image reconstruction (see Section 1.2). Each implementation of the Wiener filtering method encapsulates several trials, whereby the constant term C (see Eq. (1.4)) is first set to the reciprocal of the estimated SNR value, and then scaled using a logarithmic search over the set of values [10^{-10}, 10^{10}]. The output is then selected which minimizes the corresponding relative error given by Eq. (4.1).

Figure 5: An example specimen-reference hybrid object. The specimen X here is a 256 × 256 mimivirus image [26], and the reference R here is a 256 × 256 uniformly redundant array (URA) [7].
The relative error used for these experiments is given by:

  E(X̂) = ‖ Y − Y_0 ‖ / ‖ Y_0 ‖,    (4.1)

where Y_0 denotes the experimental data, and Y = |F(X̂ + R)|^2 is the data corresponding to the reconstructed image X̂. For experiments involving a beamstop, this is replaced with Y = B ⊙ |F(X̂ + R)|^2, as in Eq. (2.3). This error metric is standard and naturally applicable for HCDI data, for which only the measured data Y_0 is known (e.g. see [30,31]).
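The relative error metric of Eq. (4.1) can be sketched in code as follows; this assumes (as an illustrative simplification) that Y_0 is expressed in the same units as the model intensities, whereas raw photon counts would first need rescaling.

```python
import numpy as np

def relative_error(X_rec, R, Y0, m, B=None):
    """Relative data error as in Eq. (4.1): ||Y - Y0|| / ||Y0||, with
    Y = |F(X_rec + R)|^2 the data corresponding to the reconstruction
    (masked by B when a beamstop is present). Assumes Y0 shares Y's units."""
    S = np.concatenate([X_rec, np.zeros_like(X_rec), R], axis=1)
    Y = np.abs(np.fft.fft2(S, s=m)) ** 2
    if B is not None:
        Y = B * Y
    return np.linalg.norm(Y - Y0) / np.linalg.norm(Y0)

rng = np.random.default_rng(0)
X, R = rng.random((16, 16)), rng.random((16, 16))
S = np.concatenate([X, np.zeros_like(X), R], axis=1)
Y0 = np.abs(np.fft.fft2(S, s=(32, 96))) ** 2
print(relative_error(X, R, Y0, (32, 96)))   # 0.0 for an exact reconstruction
```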

Low-photon imaging experiments
As discussed in Section 2.1, imaging given low-photon data is necessary for HCDI applications to biological specimens that are sensitive to radiation damage, and also requires fewer energy resources. At the same time, low-photon data is corrupted by high levels of Poisson shot noise. This provides the main motivation for developing a maximum likelihood framework for holographic phase retrieval. To validate the performance of this method, we consider the comparative performance of the HoloML algorithms on various biophysical specimen test images corrupted with Poisson shot noise at an average photon flux of N_p = 1. The images reconstructed from these simulations are shown in Fig. 6, and the corresponding relative error values are shown in Fig. 7. In these experiments, the HoloML algorithms clearly produce the best image reconstructions, as well as the smallest relative errors. This behavior is expected, since only the HoloML method accounts for high levels of Poisson shot noise.

Varying the photon flux
The comparative performance of these algorithms at varying average photon flux values, specifically for N p = 1000, 100, 10, 1, 0.1, is studied. The resulting reconstructed images and corresponding relative errors are shown in Fig. 8 and Fig. 9, respectively. It is observed that as the value of N p decreases, the image reconstruction quality is dramatically improved using the HoloML algorithms.

Beamstop occlusion
We repeat the experiments of the preceding subsections given data that is occluded by a beamstop apparatus. Specifically, a 25 × 25 region consisting of the lowest-frequency data is zeroed out, as is typical in HCDI experiments. When comparing with the inverse and Wiener filtering algorithms, a Gaussian function is fit to replace the missing frequencies, as is commonly done in practice [8]. We observe in all experiments that the HoloML methods significantly outperform the classical algorithms, throughout all photon flux levels and especially within the low-photon regime. This is shown in Fig. 10 and the subsequent figures.

Figure 10: Reconstruction of biophysical test images from simulated low-photon data (with Np = 1) using various holographic phase retrieval algorithms, with low-frequency data occluded by a beamstop. The HoloML algorithms provide superior results.

Reference object performance
The choice of reference object implemented in holographic CDI can significantly affect the quality of the reconstructed image [4], and is a major design consideration. Thus, we seek to evaluate the performance of the HoloML method for each of the leading reference choices. To this end, experiments were performed in which the reference object was varied (while keeping the other simulation parameters the same). The reference choices implemented are the pinhole reference, block reference, and uniformly redundant array (URA) reference (see Section 2.3), as shown in Fig. 4, as well as no reference (i.e. a region consisting of all zero values). In these simulations, given various biophysical test images and for each of these reference choices, experiments were conducted in which the data was subject to Poisson shot noise at N_p = 1 and the HoloML-CG algorithm was applied. Fig. 14 and Fig. 15 show the results of these simulations, for data without and with beamstop occlusion, respectively. It is observed that the URA reference consistently produces the best image reconstruction, while the block reference performs best from amongst references with simpler geometries. This behavior is also observed in prior theoretical and experimental works (which use different algorithms) comparing reference performance, as discussed in Section 2.3.

Breaking classical algorithm barriers
In contrast to the current methods discussed in Section 2.4, there is no minimal oversampling ratio necessary for implementing the HoloML methods. Numerical simulations demonstrate that these methods are capable of image recovery given data at a far smaller oversampling ratio, without loss of reconstruction quality, as shown in Fig. 16 and Fig. 17. This allows for higher resolution imaging for a given experimental setup [22]. The HoloML methods likewise do not require any minimum separation between the specimen and reference objects. Numerical simulations demonstrate that for the HoloML methods, the quality of image reconstruction is not significantly impacted by the distance between the specimen and reference objects, including when this distance is zero. This is illustrated in Fig. 18 and Fig. 19, which show the reconstruction of the mimivirus image using the HoloML-CG algorithm given a specimen and reference input separated by a distance d, and given data at various photon flux levels N_p. (The case where d = n coincides with the previous experimental setups, and is shown in Fig. 5.) By allowing for a smaller separation distance between the specimen and reference objects, or none at all, the HoloML methods thus allow for more robust and flexible design of HCDI experiments.
The HoloML methods also do not require a reference geometry which lies within a rectangle that is separate from the imaging specimen, as is essentially required for deconvolution-based methods (see Section 2.4). In Fig. 20, an example of a specimen-reference setup violating these classical requirements is shown, in which the reference has an annular shape and surrounds the specimen. Alongside this setup is shown the recovery of the specimen (the mimivirus) via the HoloML-CG algorithm given noiseless data and data with N_p = 1, respectively. In the noiseless setting, the recovery is essentially exact. Given the low-photon data, the recovery is of high quality, and comparable to that achieved using standard reference shapes (see Section 4.3). Note that the HoloML algorithm and its numerical implementations easily accommodate this irregular geometry, in contrast with the classical algorithms.

Figure 14: Reconstructed images from simulated low-photon data (with Np = 1), with no beamstop, acquired from setups with various reference objects. The best recovered image quality is provided by the URA reference, followed by the block reference.

Tabletop prototype experiment
The performance of the HoloML methods on real image data was tested using data from the following tabletop prototype experiment. An approximately 2mm × 2mm photograph of the well-known Cameraman test image was situated on a microscope slide, and a triangular-shaped region was cut from the slide to form a reference object, as shown in Fig. 21. Approximately two-times oversampled Fourier transform magnitude measurements were collected via illumination from a He-Ne laser, which serves as a low-to-mid range photon source [32]. Image reconstruction was performed from these measurements using the inverse filtering, Wiener filtering, HoloML-CG, and HoloML-TR algorithms. The results of these reconstructions are shown in Fig. 22. It is evident that the HoloML methods produce superior image reconstruction compared to the other methods.

Figure 15: Reconstructed images from simulated low-photon data (with Np = 1) and with low-frequency data occluded by a beamstop, acquired from setups with various reference objects. The best recovered image quality is provided by the URA reference, followed by the block reference.

Conclusions and future work
A new algorithmic framework for holographic CDI via maximum likelihood estimation, termed HoloML, is introduced and developed. This method provides superior image reconstruction given data that is highly corrupted by Poisson shot noise, as occurs in low-photon HCDI imaging. It also gives far improved imaging given HCDI data that is occluded by a beamstop apparatus, as is typical in practice. This optimization approach is also highly robust in that it does not require the physical constraints needed for current algorithms (specifically, at least two-times oversampled data, a minimum reference-specimen separation distance, and a reference geometry that is contained in a rectangle that does not overlap with the specimen). Moreover, the lack of these conditions does not negatively impact the image quality. It is also robust algorithmically in that the HoloML objective function can be effectively optimized using a number of standard numerical algorithms, including both first- and second-order methods. This behavior is indeed novel, since the objective function is nonconvex, and is unprecedented when comparing with the behavior of other optimization methods for phase retrieval [25].
Based on these successful results on simulated data as well as the tabletop prototype experiment, it would be very interesting to implement the HoloML method on data collected from nanoscale low-photon HCDI experiments. This framework could as well be applied to any holography data that is subject to Poisson shot noise. It would also be interesting to pursue a theoretical study of the function landscape of the HoloML objective function given by Eq. (3.3), similarly to other recent theoretical works on phase retrieval and optimization [33,14,34]. Considering the successful application of various numerical methods towards optimizing this function, despite its nonconvexity, a reasonable conjecture to investigate is that all of the function's local minima are within a small distance of the global minimizer. Another direction for future work would be to consider the effects of readout noise in CCD detectors. This readout noise adds a small Gaussian term to the Poisson shot noise model. While readout noise is largely negligible in comparison to Poisson noise and thus usually not analytically modeled [35,13,15], further refinements could conceivably be achieved via its explicit modeling.

Figure 20: Left to right: specimen (mimivirus) surrounded by an annular-shaped reference object; reconstruction of the image using the HoloML-CG algorithm with noiseless simulated data; and reconstruction given simulated data with a photon flux of Np = 1. The recovery of the specimen is successful and comparable with that given a standard (i.e. rectangular) reference geometry.

Figure 21: Setup for a prototype experiment with experimental imaging data. The Cameraman image is a miniature photograph of approximate size 2mm × 2mm situated on a microscope slide. A triangular-shaped region is cut from the slide to form a reference object.