Performance evaluation of Bragg coherent diffraction imaging

In this study, we present a numerical framework for modeling three-dimensional (3D) diffraction data in Bragg coherent diffraction imaging (Bragg CDI) experiments and for evaluating, under controlled conditions, the quality of the 3D complex-valued real-space images recovered by reconstruction algorithms. The approach is used to systematically explore the performance and the detection limit of this phase-retrieval-based microscopy tool. The numerical investigation suggests that the best performance of Bragg CDI is achieved with an oversampling ratio above 30 and a detection dynamic range above six orders of magnitude. The performance degradation caused by data binning is also studied. This numerical tool can be used to optimize experimental parameters and has the potential to significantly improve the throughput of the Bragg CDI method.


Introduction
Understanding the role of atomic-level displacements in determining macroscopic physical properties and chemical reactivities requires quantitative and reliable characterization of the strain field, with a spatial resolution that is usually around or better than 10 nm and a displacement sensitivity on the sub-interplanar-spacing level of the material being imaged. Bragg coherent diffraction imaging (Bragg CDI) fulfills these goals by analyzing the information encoded in the diffraction signal surrounding Bragg peaks: the location of a Bragg peak is determined by the global periodic lattice structure, while the intensity distribution in the vicinity of the peak is determined by the shape of the particle [1] and the localized displacement fields [2]. In Bragg CDI, by rocking the crystalline sample within a relatively small angular range (typically < 1°), the three-dimensional (3D) diffraction pattern is collected frame by frame by a fixed two-dimensional (2D) x-ray detector [3].
Recovering the unmeasured phases of the diffraction pattern using phase-retrieval algorithms yields complex-valued crystal images in real space, where the atomic-level displacement field is encoded in the phase as $\phi(\vec{r}) = \vec{q} \cdot \vec{u}(\vec{r})$, the projection of the displacement $\vec{u}(\vec{r})$ onto the momentum transfer vector $\vec{q}$ [4]. The complete strain tensor field can be obtained by measuring the projection components along multiple non-coplanar scattering vectors [5]. This unique capability to access the displacement field in nanocrystals finds wide application, especially in investigating morphology and displacement evolution under varying external conditions, such as chemically derived stress [6-8], temperature-induced strain [9], dislocation propagation during crystal growth [10,11], laser-pulse-induced lattice dynamics [12,13], radiation damage in protein crystals [14,15], static-pressure-driven strain [16,17], lattice defects during battery charging [18-20], and ion-implantation-induced strains [21]. Bragg CDI has also been further developed to integrate with lateral scans for an enlarged field of view [22-28].
The accuracy and reliability of the numerically recovered real-space images are determined by how faithfully the lost phase information can be reconstructed. The performance of this phase-retrieval process is influenced by experimental conditions such as the sampling ratio [29], the detection dynamic range [30], and the noise level [31]. The continuous development of reconstruction algorithms and data processing techniques pushes the frontier of the detection sensitivity of the Bragg CDI method. Systematic investigation of the performance and capability of phase-recovery techniques in diffractive imaging experiments has been approached with numerical simulations [29,32]. However, to the best of our knowledge, none of the currently available tools can distinguish the phase components caused by actual lattice displacement from the phase fluctuations that are statistical uncertainties introduced by the phase-retrieval process itself. In reality, these uncertainties, which stem from the assumed initial random phases, effectively define the reliability of the recovered phases and set the detection limit of the Bragg CDI method. To explore the performance of this technique, we develop a framework for forward modeling the diffraction pattern of a typical Bragg CDI experiment, with full control of measurement-condition adjustments. The numerically simulated data are then fed into a well-established reconstruction process to produce real-space images. With no initial displacement or Bragg electron density variation as inputs, any deviation from uniformity in either the amplitude or the phase of the reconstructed complex-valued image indicates artifacts induced by the method itself, below which an observed distribution cannot be associated with physical phenomena within the crystal structure. The study also identifies the optimum experimental conditions for achieving the best performance of the Bragg CDI method.

Forward modeling
Our approach is to generate diffraction data from crystalline particles using analytical formulations [1] and then invert the simulated data using established phasing algorithms [33-35]. This methodology allows us to evaluate, in a robust way, the accuracy of the outputs of the phase-retrieval analysis against the input particle and to isolate statistical errors from systematic errors that may be inherent in experimental setups.

Generating the ideal diffraction data at a reciprocal space point
In this study, we consider the diffraction phenomenon in the kinematic regime [36], which can be analytically described using the classical formulation of the kinematic scattering of x-rays from crystalline materials.

Atomic summation method
This is the most general formula for calculating the elastically scattered x-ray intensity from an irradiated particle within the kinematic diffraction regime; it is based on the summation of the individually scattered x-ray waves from each atom. The summation requires as input only the coordinates of the individual atoms inside the particle; hence the formula is applicable to the diffracted signal of all types of materials, crystalline and non-crystalline. For an isolated particle irradiated by an incoming plane monochromatic x-ray beam with wavevector $\vec{k}_i$, the total scattering amplitude at momentum transfer vector $\vec{q} = \vec{k}_d - \vec{k}_i$ is

$$A(\vec{q}) = \sum_{n=1}^{N} f_n \, e^{i \vec{q} \cdot \vec{r}_n}. \tag{1}$$

In (1) the summation is performed over all $N$ atoms within the irradiated particle, and the vector $\vec{r}_n$ refers to the coordinate of atom $n$ with respect to an arbitrary origin. Due to the nature of elastic scattering, the magnitudes of the incoming and outgoing wavevectors are the same, $|\vec{k}_i| = |\vec{k}_d| = 2\pi/\lambda$, where $\lambda$ is the wavelength of the irradiation. The multiplier $f_n$ of the exponential term is the atomic scattering factor, which depends on the type of the scattering atom as well as on the scattering angle defined by the incoming and diffracted x-ray beams.
Once the total scattering amplitude at a particular momentum transfer vector is calculated, the corresponding intensity is obtained by multiplying it with its complex conjugate,

$$I(\vec{q}) = A(\vec{q}) \, A^*(\vec{q}) = |A(\vec{q})|^2.$$

We note that this formulation considers only the elastic scattering of the x-rays by the atoms of the irradiated particle and excludes other contributions to the diffracted signal, such as inelastic scattering, refraction effects, detector counting statistics, absorption of the incoming x-rays by the particle, and background scattering.
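As a concrete illustration, the atomic summation of equation (1) and the intensity $I(\vec{q}) = |A(\vec{q})|^2$ can be sketched in a few lines of Python. The function name and the toy simple-cubic lattice below are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def atomic_summation_intensity(coords, f, q):
    """Kinematic scattering intensity from explicit atomic coordinates.

    coords : (N, 3) array of atomic positions r_n
    f      : (N,)  array of atomic scattering factors f_n
    q      : (3,)  momentum transfer vector
    """
    phases = coords @ q                       # q . r_n for every atom
    amplitude = np.sum(f * np.exp(1j * phases))
    return float(np.abs(amplitude) ** 2)      # I(q) = |A(q)|^2

# Toy example: a 5 x 5 x 5 simple-cubic lattice with unit scattering factors.
a = 1.0                                       # lattice constant (arbitrary units)
grid = np.arange(5) * a
coords = np.array([[x, y, z] for x in grid for y in grid for z in grid])
f = np.ones(len(coords))

# At a reciprocal lattice point q = (2*pi/a)(1, 0, 0) all atoms scatter in
# phase, so the intensity is N^2 = 125^2.
q_bragg = np.array([2 * np.pi / a, 0.0, 0.0])
I_bragg = atomic_summation_intensity(coords, f, q_bragg)
```

Away from a reciprocal lattice point the per-atom phases no longer align and the summed intensity drops well below the $N^2$ Bragg value.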
Although the atomistic summation approach is a versatile tool providing the scattering signal for a general particle, it is computationally expensive, especially when the particle size is around 20-30 nm and larger. Beyond this size range, the number of terms to evaluate in (1) exceeds one million. For ideal crystalline particles, in which the stacking of atoms follows a strict periodicity, an alternative approach based on Fourier transforms exists. Hence, for the purposes of this article, we restrict our attention to crystalline particles and follow this alternative approach, also known as the Patterson formulation [1].

Patterson formulation
As is apparent from the form of (1), the total amplitude of the elastically scattered x-rays from an irradiated crystalline particle can also be written as a 3D Fourier transform of the electron density inside the particle, where the momentum transfer vector $\vec{q}$ acts as the Fourier variable. If we define a particle shape function $y(\vec{r})$ that is identically unity inside the particle and zero outside, the electron density inside the particle can be written as

$$\rho(\vec{r}) = y(\vec{r}) \, \rho_\infty(\vec{r}),$$

where $\rho_\infty(\vec{r})$ is the three-dimensional, infinitely periodic electron density function. Then the Fourier transform of the electron density inside the particle takes the form

$$A(\vec{q}) = \int y(\vec{r}) \, \rho_\infty(\vec{r}) \, e^{i \vec{q} \cdot \vec{r}} \, d\vec{r}. \tag{2}$$

For crystalline materials, the amplitude of the unit building block, i.e. the unit cell, is termed the structure factor of the material and can also be written in terms of a Fourier transform,

$$F_{h'k'l'} = \int_{\text{cell}} \rho(\vec{r}) \, e^{i \vec{H}_{h'k'l'} \cdot \vec{r}} \, d\vec{r}, \tag{3}$$

where $\vec{H}_{h'k'l'} = h'\vec{a}_1^* + k'\vec{a}_2^* + l'\vec{a}_3^*$ is the reciprocal lattice vector, the integration runs only over the unit cell, and $\vec{a}_i^*$ ($i = 1, 2, 3$) are the reciprocal-space basis vectors. Using (3), the infinitely periodic 3D electron density can be written as a Fourier series involving the structure factor $F_{h'k'l'}$ of the material at all $h'k'l'$ lattice vectors,

$$\rho_\infty(\vec{r}) = \frac{1}{\nu_{\text{cell}}} \sum_{h'k'l'} F_{h'k'l'} \, e^{-i \vec{H}_{h'k'l'} \cdot \vec{r}}, \tag{4}$$

where $\nu_{\text{cell}}$ is the volume of the unit cell. Substituting (4) for $\rho_\infty$, (2) can be rearranged as

$$A(\vec{q}) = \frac{1}{\nu_{\text{cell}}} \sum_{h'k'l'} F_{h'k'l'} \, Y\!\left(\vec{q} - \vec{H}_{h'k'l'}\right), \tag{5}$$

where $Y$ denotes the Fourier transform of the shape function. As shown in (5), the scattering amplitude of a crystalline particle with shape function $y(\vec{r})$ is proportional to the sum over the Fourier transform of the shape function itself evaluated at all reciprocal space points $h'k'l'$. A further simplification can be made for real diffraction experiments, since in most cases the diffraction intensity is dominated by the terms evaluated at the few lattice points closest to the momentum transfer vector.
Using this simplification and assuming that only one lattice point, $hkl$, dominates the intensity distribution, the final form of the diffracted intensity can be written as

$$I(\vec{q}) \propto \left| \frac{F_{hkl}}{\nu_{\text{cell}}} \right|^2 \left| Y\!\left(\vec{q} - \vec{H}_{hkl}\right) \right|^2, \tag{6}$$

where $Y(\vec{q} - \vec{H}_{hkl})$ is the Fourier transform of the shape function evaluated at the Fourier variable $\vec{q} - \vec{H}_{hkl}$. Equation (6) allows one to compute the diffracted intensity around a particular reciprocal space vector $\vec{H}$ analytically in a single step. However, we note that this expression is accurate only in the close vicinity of the reciprocal space vector, since the exclusion of the remaining terms of the infinite sum in (5) may cause (6) to underestimate the true value of the theoretical intensity. This deviation from the true intensity value is expected to become more severe as the diffracting particle size falls below 10 nm, because significant overlap between different lattice points is then predicted by the Scherrer broadening phenomenon [38]. In such cases, one should also include in the final form of the Patterson function (equation (6)) the contributions from the neighboring reciprocal space points $h'k'l'$. As another shortcoming, the structure factor term $F_{hkl}$ imposes a unique unit cell type representing the crystal structure of the irradiated particle; therefore the Patterson formulation cannot model intensity variations due to local modifications of the atomic stacking, such as surface relaxation or volume defects. To model the intensity variations due to such irregularities in the crystal structure, one should use the more general, summation-based approach (equation (1)).

Generating 2D rocking curve scans around a particular Bragg peak
The experimental geometry for a typical Bragg CDI rocking curve measurement is presented in figure 1. In the simulation process, an individual (2D) frame within a 3D data stack corresponds to a unique orientation of the particle with respect to the incoming x-ray beam. Depending on the orientation of the particle, the reciprocal space coordinate system also changes its orientation. Thus, for consistency, all coordinate vectors are transformed into the laboratory coordinate system before computing the diffraction intensity.
The process flows as follows: first, we generate grids with evenly distributed points, where each point represents the position of a pixel. This synthetic detector is oriented such that the diffracted beams captured by the central pixel make an angle of $2\theta_B$ with the incoming beam direction $\vec{k}_i$, where $\theta_B$ is the angle satisfying the Bragg equation, $\lambda = 2 d_{hkl} \sin\theta_B$, and $d_{hkl}$ is the interplanar spacing of the selected $hkl$ reflection.
By distributing an appropriate number of pixels on the detector, we cover a reasonable area in reciprocal space and compute the diffracted intensities at each point on the detector grid. For each pixel with indices $(i, j)$, the incoming and diffracted wavevectors are expressed in terms of real-space parameters as

$$\vec{k}_i = \frac{2\pi}{\lambda}\,\hat{k}_i, \qquad \vec{k}_d^{\,i,j} = \frac{2\pi}{\lambda} \, \frac{x_i \hat{x} + y_j \hat{y} + Z_{sd}\,\hat{n}}{\sqrt{x_i^2 + y_j^2 + Z_{sd}^2}}, \tag{7}$$

where $\lambda$ is the wavelength of the irradiation, $d_{\text{pix}}$ is the size of an individual detector pixel, $Z_{sd}$ is the distance between the particle center and the detector (grid) center, $x_i$ and $y_j$ are the coordinates of an individual pixel with indices $(i, j)$ with respect to the detector center, $\hat{n}$ is the unit vector from the particle center to the detector center, and $\hat{x}$, $\hat{y}$, $\hat{z}$ are the unit direction vectors oriented as shown in figure 1. Using the transformed components of $\vec{k}_i$ and $\vec{k}_d^{\,i,j}$ in the laboratory coordinate system, the components of the momentum transfer vector $\vec{q}_{i,j} = \vec{k}_d^{\,i,j} - \vec{k}_i$ are evaluated at each pixel position.
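The per-pixel construction of the momentum transfer vectors can be sketched as follows. Since the exact axis convention of figure 1 is not reproduced here, this sketch adopts one illustrative convention (beam along $+z$, detector rotated by $2\theta$ about the x-axis); the function name and layout are our own:

```python
import numpy as np

def pixel_q_vectors(n_pix, d_pix, z_sd, wavelength, two_theta):
    """Momentum-transfer vector q_ij for every pixel of a flat detector.

    Assumed geometry (illustrative, not the paper's exact axes): the beam
    k_i travels along +z; the detector centre sits at distance z_sd from
    the sample, rotated by two_theta about the laboratory x-axis.
    """
    k = 2 * np.pi / wavelength
    k_i = k * np.array([0.0, 0.0, 1.0])

    # Pixel coordinates relative to the detector centre.
    idx = (np.arange(n_pix) - n_pix / 2) * d_pix
    xi, yj = np.meshgrid(idx, idx, indexing="ij")

    # Direction from sample to detector centre, and in-plane detector axes.
    n_hat = np.array([0.0, np.sin(two_theta), np.cos(two_theta)])
    x_hat = np.array([1.0, 0.0, 0.0])
    y_hat = np.cross(n_hat, x_hat)

    # Lab-frame position of each pixel, normalised to the Ewald sphere |k|.
    pos = xi[..., None] * x_hat + yj[..., None] * y_hat + z_sd * n_hat
    k_d = k * pos / np.linalg.norm(pos, axis=-1, keepdims=True)
    return k_d - k_i                          # q = k_d - k_i, shape (n, n, 3)

# Parameters matching the setup described later in the paper.
q = pixel_q_vectors(n_pix=128, d_pix=55e-6, z_sd=1.0,
                    wavelength=1e-10, two_theta=np.radians(60.0))
```

At the central pixel the magnitude of $\vec{q}$ reduces to the familiar $4\pi \sin\theta_B / \lambda$, which is a quick consistency check on the geometry.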
Next, the expected intensity at each pixel position is computed using the Patterson formulation (6) described previously. For this purpose, the shape function of the diffracting crystallite and its Fourier transform need to be computed. For a cubic particle with side length $t$ positioned at the origin of an arbitrary coordinate system, the Fourier transform of the shape function is calculated as

$$Y(\Delta\vec{q}) = \int_{-t/2}^{t/2}\!\!\int_{-t/2}^{t/2}\!\!\int_{-t/2}^{t/2} e^{i \Delta\vec{q} \cdot \vec{r}} \, dx \, dy \, dz, \tag{8}$$

where $\Delta\vec{q}$ is the deviation of the momentum transfer vector $\vec{q}$ from the reciprocal lattice vector $\vec{H}$, with orthogonal components $\Delta q_x$, $\Delta q_y$ and $\Delta q_z$ along the respective axes.
Evaluating (8), we obtain

$$Y(\Delta\vec{q}) = t^3 \, \frac{\sin(\Delta q_x t/2)}{\Delta q_x t/2} \, \frac{\sin(\Delta q_y t/2)}{\Delta q_y t/2} \, \frac{\sin(\Delta q_z t/2)}{\Delta q_z t/2}. \tag{9}$$

Combining (7) with (6) and (9), the diffracted intensity can be computed at each pixel $(i, j)$ corresponding to the momentum transfer vector $\vec{q}_{i,j}$. This yields the 2D intensity distribution of the irradiated particle at one particular orientation. To generate the full 3D data set, the particle is rotated around the laboratory x-axis in small steps in the vicinity of the selected lattice vector $\vec{H}$, and at each orientation of the particle a new 2D intensity distribution is computed. Throughout the calculations the orientation of the detector is kept fixed.
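The cube shape-function transform of equation (9) and the single-peak Patterson intensity of equation (6) are straightforward to evaluate numerically. The function names below are ours; note that NumPy's `sinc` uses the normalised convention $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$, hence the division by $\pi$:

```python
import numpy as np

def cube_shape_ft(dq, t):
    """Fourier transform of a cube's shape function, side length t.

    Y(dq) = t^3 * prod_i sin(dq_i t / 2) / (dq_i t / 2),
    written with np.sinc (normalised convention sin(pi x)/(pi x)).
    """
    dq = np.asarray(dq, dtype=float)
    return t**3 * np.prod(np.sinc(dq * t / (2 * np.pi)), axis=-1)

def patterson_intensity(dq, t, F_hkl=1.0, v_cell=1.0):
    """Diffracted intensity near a single Bragg peak, equation (6)."""
    return np.abs(F_hkl / v_cell) ** 2 * np.abs(cube_shape_ft(dq, t)) ** 2
```

At $\Delta\vec{q} = 0$ the transform equals the cube volume $t^3$, and the first zero along each axis lies at $\Delta q_i = 2\pi/t$, which sets the fringe spacing of the diffraction pattern.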

Phasing of the 3D ideal diffraction data
In the phase-retrieval process, the smallest distinguishable feature of the reconstructed object, i.e. the spatial resolution of the reconstruction, is dictated by the data collected along the momentum transfer vector with the maximum magnitude. The reconstruction pixel size can then be calculated from the positions of the pixels placed at the edges of the detector area, which measure the intensity along the momentum transfer vector with magnitude $|\vec{q}_{\max}| = 4\pi \sin\theta / \lambda$ [39]. Using the small-angle approximation for the sample rotation and the Bragg angle, this corresponds to an in-plane pixel size

$$sp_{x,y} = \frac{\lambda \, Z_{sd}}{N \, d_{\text{pix}}},$$

where $N$ is the number of pixels along one side of the detector. The ultimate resolution of the 3D reconstruction also depends on the resolution perpendicular to the detector plane, which is set by the angular range of the particle rotation and calculated as

$$sp_z = \frac{\lambda}{2 \, N_{\text{frames}} \, \Delta\theta \, \sin\theta_B},$$

where $N_{\text{frames}}$ is the number of 2D patterns within a single 3D data stack and $\Delta\theta$ is the step size of the particle rotation [3].
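As a quick numerical check, the in-plane and rocking-direction pixel sizes can be evaluated with the experimental parameters quoted later in this paper (1 Å wavelength, 1 m sample-detector distance, 128 pixels of 55 μm, 0.00315° rocking step at a 30° Bragg angle); both come out at about 14.2 nm:

```python
import numpy as np

wavelength = 1e-10             # 1 angstrom
z_sd = 1.0                     # sample-detector distance, m
n_pix = 128                    # pixels per detector side (= number of frames)
d_pix = 55e-6                  # detector pixel size, m
theta_b = np.radians(30.0)     # Bragg angle
d_theta = np.radians(0.00315)  # rocking step

sp_xy = wavelength * z_sd / (n_pix * d_pix)                  # in-plane voxel
sp_z = wavelength / (2 * n_pix * d_theta * np.sin(theta_b))  # rocking direction
# both evaluate to ~14.2 nm for these parameters
```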
In our simulations, we generate cubic data sets of 128×128×128 voxels, in which the number of pixels along one side of the square detector equals the number of particle orientations centered at the Bragg angle. By selecting proper values for the angular step size and range of the particle rotation, we set the reconstruction pixel size and the oversampling conditions in all $(q_x, q_y, q_z)$ directions. The simulated diffraction datasets are inverted using the phase-recovery process outlined in the literature [10-12]. The phasing process starts by initializing the support size to 80% of the whole input array and assigning random values to the pixels within the support. The support function is refined using the shrinkwrap approach [40] with a Gaussian blurring function of 0.5 pixel width and a 12% cutoff threshold. In all reconstructions in this study, 500 iterations are carried out, which is sufficient for the χ² error metric to converge. The first 10 iterations are performed with the error-reduction algorithm [33], and the remaining 490 iterations with the hybrid input-output algorithm [34]. The inherent ambiguities due to the Fourier transform operator that could affect the resulting reconstruction quality, e.g. the presence of a twin image [41], are handled automatically by the algorithm. The final reconstructed object is obtained by averaging the reconstructions from every other iteration after iteration 400. We note that alternating the algorithms in a different manner, or using different algorithms such as the difference map [35], does not significantly change the obtained image quality.
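The iteration schedule described above (error reduction followed by hybrid input-output, with a Fourier-modulus projection each cycle) can be sketched in generic textbook form. This is not the authors' code: the shrinkwrap support refinement and the final averaging step are omitted for brevity, and the function name is ours:

```python
import numpy as np

def phase_retrieve(modulus, support, n_er=10, n_hio=490, beta=0.9, seed=0):
    """Minimal ER + HIO phasing loop (sketch of the schedule in the text).

    modulus : measured Fourier modulus, same shape as the object array
    support : boolean array, True inside the current support estimate
    """
    rng = np.random.default_rng(seed)
    # Start from random phases inside the support.
    obj = support * np.exp(2j * np.pi * rng.random(modulus.shape))
    for it in range(n_er + n_hio):
        # Fourier-modulus projection: keep phases, impose measured modulus.
        F = np.fft.fftn(obj)
        F = modulus * np.exp(1j * np.angle(F))
        obj_p = np.fft.ifftn(F)
        if it < n_er:
            # Error reduction: zero everything outside the support.
            obj = np.where(support, obj_p, 0)
        else:
            # Hybrid input-output: feedback outside the support.
            obj = np.where(support, obj_p, obj - beta * obj_p)
    return obj
```

In practice the support would be periodically tightened with shrinkwrap and several reconstructions averaged, as described in the text.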

Analysis of the reconstructions
In this section we present a statistical analysis of the variations within the reconstructed phase and amplitude distributions over multiple runs of the phasing algorithm on the ideal diffraction data generated by the analytical means described in the previous section. The artifacts revealed by the analysis of ideal diffraction data are intended to set the baseline for detectable phase features from an irradiated crystal; as a result, any additional contribution to the uncertainty in the diffraction data, such as noise or background scattering, would be expected to result in amplified uncertainties in the reconstructed parameters.

Effect of oversampling ratio
In coherent diffraction experiments, one of the fundamental requirements for successfully retrieving phases from the modulus of the diffraction data is selecting a proper oversampling ratio, i.e. sampling the reciprocal space finely enough relative to the Nyquist interval. The lower limit of this sampling requirement has been addressed for the CDI method in transmission geometry: successful reconstructions have been demonstrated with simulated data sets at oversampling ratios as low as 2.57 [29]. However, the optimum oversampling condition for the best performance of Bragg CDI has not been explored. In the following, we investigate the performance of the phasing algorithm at varying 3D oversampling ratios, computed by dividing the number of pixels inside the whole reconstruction volume by the number of pixels occupied by the particle.
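The 3D oversampling ratio as defined above is a one-line computation. The helper name is ours; the example uses the 600 nm cube and 14.2 nm voxel size quoted elsewhere in the paper, for which the paper reports an oversampling ratio of 28:

```python
def oversampling_ratio(array_shape, particle_voxels):
    """3D oversampling ratio: voxels in the whole reconstruction volume
    divided by the voxels occupied by the particle."""
    total = 1
    for n in array_shape:
        total *= n
    return total / particle_voxels

# Example: a 600 nm cube sampled at 14.2 nm voxels inside a 128^3 array.
side_voxels = round(600 / 14.2)                                # ~42 voxels
ratio = oversampling_ratio((128, 128, 128), side_voxels ** 3)  # ~28
```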
For this analysis, we generate a series of 3D diffraction data sets of ideal, cubic, crystalline particles with dimensions ranging from 500 to 1100 nm using (9). Without loss of generality, we set the Bragg angle $\theta_B$ to 30° for all particle sizes and assume monochromatic plane-wave illumination with a wavelength of 1 Å. In accordance with these setup parameters and the schematic shown in figure 1, we simulate a square detector with 128×128 pixels of 55 μm pixel size. The detector is oriented such that its center makes an angle of $2\theta_B = 60°$ with the incoming x-ray beam. This geometry, where the incoming beam $\vec{k}_i$, the diffracted beam directed at the detector center $\vec{k}_d^{\,0,0}$, and the normal to the diffracting planes $\vec{H}$ are coplanar, satisfies the symmetric diffraction condition. A total of 128 frames of 2D diffraction patterns are generated, during which the particle is rotated with a 0.00315° angular step around the laboratory x-axis. By selecting a sample-to-detector distance of 1 m, we set the sample pixel sizes in all directions $(sp_x, sp_y, sp_z)$ to 14.2 nm.
Since the data are generated from displacement-free crystals and no photon noise is added, uniform amplitude and phase are expected inside the perfect crystals. Any deviation from uniformity observed in the reconstructed images is therefore an artifact introduced by the phase-retrieval process. A typical reconstruction is shown in figure 2. While the shape and size of the particle are reconstructed accurately, we find variations in both the recovered amplitude and phase images inside the crystal, as shown in the top panel of figure 2. This fluctuation is characterized by the deviation width (σ) estimated from a Gaussian fit of the histogram, as shown in the bottom panel of figure 2. By this criterion, a lower deviation width represents better image quality. For each particle size, five independent reconstructions are performed with random initial conditions. The final value of the deviation width and its uncertainty are estimated from the mean and standard deviation of the five reconstruction results.
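The deviation-width metric (Gaussian fit to the histogram of reconstructed values inside the particle) can be sketched as follows; this is our own SciPy-based implementation of the fitting step described in the text, with illustrative names:

```python
import numpy as np
from scipy.optimize import curve_fit

def deviation_width(values, bins=50):
    """Deviation width sigma from a Gaussian fit to the histogram of the
    reconstructed phase (or amplitude) values inside the particle."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def gauss(x, a, mu, sigma):
        return a * np.exp(-((x - mu) ** 2) / (2 * sigma**2))

    # Moment-based starting guess for the fit.
    p0 = [hist.max(), values.mean(), values.std()]
    (_, _, sigma), _ = curve_fit(gauss, centers, hist, p0=p0)
    return abs(sigma)
```

For normally distributed reconstruction residuals the fitted width recovers the underlying standard deviation, so a width of 0.01 rad on the phase map corresponds directly to the displacement detection limit discussed later.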
Applying this analysis to a wide range of particle sizes while keeping the input array size fixed at $128\,sp_x \times 128\,sp_y \times 128\,sp_z$ yields the relationship between the oversampling ratio and the image quality of the reconstructed amplitude and phase distributions. As shown by the blue dots in figure 3, both the amplitude and the phase deviation decrease with increasing oversampling ratio. The descending trajectory flattens, and the optimum image quality is achieved when the oversampling ratio is larger than 20.

Effect of resampling the data

Binning
The statistical analyses presented in the previous section are based on the phasing of diffraction data collected by point-pixel detectors, where each pixel is assumed to record the intensity only at an infinitesimally small point in the reciprocal space. In reality, the pixels of area detectors used in diffraction experiments record the number of diffracted photons within a finite solid angle defined by the pixel size. This implies that the photon counting is an integration process over the area of a detector pixel.
One way to simulate this integration effect is to generate diffraction intensity data using point pixels on a finer grid along the x and y directions and then bin this larger dataset by summing the pixel values that fall within the target detector pixel. For this purpose, we simulate 128 frames of diffraction data with 1024×1024 pixels of 6.875 μm pixel size. Adjacent 8×8 pixel blocks on each frame are then summed and the total intensity is assigned to a single new pixel, which yields a 128×128 data array with 55 μm pixel size, the same setting as in the previous section. The binned datasets are reconstructed as described before, and the obtained image quality is analyzed using the same approach. The results are shown as red dots and curves in figure 3. The average variations in both the reconstructed phases and amplitudes are elevated by about one order of magnitude for the binned datasets compared to the point-pixel detector case (blue dots and curves). As in the point-pixel case, increasing the oversampling ratio decreases the average deviation widths for both the reconstructed phases and amplitudes, while the turning point for the best image quality rises to an oversampling ratio of around 30. Since the atomic displacement information is extracted from the reconstructed phase map in Bragg CDI, this phase distribution width effectively sets the displacement detection limit of the technique. When the oversampling ratio is above 30, the phase deviation width settles at the 0.01 radian level, which is significantly smaller than the published results [4,5,7]. This implies that this inherent limitation has not yet been reached.
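The 8×8 binning step described above is a simple block-sum, conveniently expressed with a reshape; the function name is ours:

```python
import numpy as np

def bin_frame(frame, factor):
    """Sum adjacent factor x factor pixel blocks (detector integration)."""
    n, m = frame.shape
    assert n % factor == 0 and m % factor == 0
    return frame.reshape(n // factor, factor,
                         m // factor, factor).sum(axis=(1, 3))

# A 1024 x 1024 fine grid binned 8x gives a 128 x 128 array; each coarse
# pixel integrates 64 fine pixels, and total intensity is conserved.
fine = np.ones((1024, 1024))
coarse = bin_frame(fine, 8)
```

Because the block-sum conserves total counts while averaging over the block, it acts as a low-pass filter, which is exactly the fringe-smoothing effect discussed in the following paragraphs.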

Skipping
The binning process during photon detection cannot be avoided in real experiments. However, it is quite possible that the raw data collected in typical Bragg CDI measurements will be resampled afterwards, either to increase photon statistics or to reduce the array size for more efficient data processing. Different approaches can be applied in this post-detection resampling step.
We test a method that reduces the array size by skipping data points [42]. In this case, only one corner pixel of each 8×8 pixel patch of the original 1024×1024 array is picked and assigned to the new array. The array size is thereby reduced by a factor of eight in both the x and y directions, and the pixel separation (effective pixel size) increases from 6.875 to 55 μm. Performing the same reconstruction process five times for each skipped data set and repeating the statistical analyses, we obtain the data depicted by the green markers in figure 3. They almost overlap with the blue dots and are much lower than the results from the binned datasets.
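In contrast to the block-sum of binning, skipping is pure decimation, i.e. strided slicing; the function name is ours:

```python
import numpy as np

def skip_frame(frame, factor):
    """Keep one corner pixel of each factor x factor patch (decimation)."""
    return frame[::factor, ::factor]

# Decimating a 1024 x 1024 frame by 8 gives a 128 x 128 array whose values
# are untouched point samples of the fine grid.
fine = np.arange(1024 * 1024, dtype=float).reshape(1024, 1024)
skipped = skip_frame(fine, 8)
```

Since every retained value is an unmodified point sample, the fringe visibility of the underlying pattern is preserved, which is consistent with the near-overlap of the green and blue markers in figure 3.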
This analysis reveals that, to achieve the best reconstruction image quality, one needs to exercise caution in selecting the resampling technique. The main reason why binning introduces additional variations is that summing over sub-areas of the detector effectively smooths out the data. This effect is clearly seen in figure 4. The initial 1024×1024 diffraction frame generated at the exact Bragg angle (figure 4(a)) is resampled at different scales using both the summation and the skipping approaches. The fringe visibility gradually decreases with the binning method (figures 4(e)-(g)), while it remains the same with the skipping method (figures 4(b)-(d)). Line plots along a diffraction flare clearly show this effect, highlighted by red arrows in figures 4(h) and (i). The loss of fringe visibility and the smoothing of fine features in the intensity distributions destructively modify the diffraction pattern and thus lead to poorer reconstruction quality.

Figure 3. The average deviation widths (σ) of the reconstructed amplitude and phase with respect to the oversampling ratio. Blue dots and curves: the average deviation widths of the amplitude (left panel) and phase (right panel) distributions obtained from five independent reconstructions of intensity data calculated from ideal, crystalline, cubic particles of variable sizes using a point-pixel detector array; the optimum image quality is achieved with an oversampling ratio over 20. Red dots and curves: the average deviation widths using 8×8 binned data, which simulates the integration process during photon detection; the optimum image quality is achieved with an oversampling ratio over 30. Green markers: the average deviation widths using 8×8 skipped data.

Effect of detection dynamic range
Another parameter that plays a critical role in diffraction experiments is the dynamic range of the intensity data. Since the diffracted power decays dramatically with spatial frequency [30], the detection dynamic range sets the range and quality of the accessible information. Hence, the standard practice is to increase the dynamic range as much as possible without damaging the sample, by extending exposure times or accumulating repeated exposures. The dynamic range is also one of the key parameters of x-ray detectors: charge-coupled devices can have a photon dynamic range as low as 10², while the latest photon-counting pixel area detectors can achieve counting rates of more than 10⁶ photons per second [43,44]. Nonetheless, improving the dynamic range of the data may not be possible in all experiments, particularly for radiation-sensitive samples or studies with temporal constraints. Therefore, it is important to have a quantitative estimate of the expected variations resulting from the phasing of intensity data with limited dynamic range.
For the detection dynamic range study, we use the simulated data of the 600 nm cubic particle. The dynamic range of the perfect dataset is about 10²⁴. We tune the dynamic range by fixing the maximum pixel intensity and gradually raising the threshold for the minimum pixel intensity, setting pixel values below the threshold to zero. Figure 5 illustrates the effect of limited dynamic range on both the input data and the average variations in the resulting reconstructions. As seen in the right panel of the figure, the proportion of pixels with non-zero intensity values drops sharply as the dynamic range of the input data is reduced from 10⁸ down to 10⁴. The reconstruction image quality keeps improving until the dynamic range reaches 10⁶, after which it remains at the same level up to 10²⁴, as shown in the left panel.
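The dynamic-range truncation described above (fixed maximum, rising zero-threshold) amounts to a simple clip; the function name is ours:

```python
import numpy as np

def limit_dynamic_range(intensity, dynamic_range):
    """Zero every pixel below I_max / dynamic_range (maximum held fixed)."""
    threshold = intensity.max() / dynamic_range
    out = intensity.copy()
    out[out < threshold] = 0.0
    return out

# With a dynamic range of 1e6, any pixel weaker than one millionth of the
# peak is discarded.
frame = np.array([1.0, 1e-3, 1e-7])
clipped = limit_dynamic_range(frame, 1e6)   # the 1e-7 entry is zeroed
```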
The effect of decreasing the dynamic range of the diffraction data can also be clearly observed in the loss of sharpness of the reconstructed features. The right side of figure 6 illustrates the 3D reconstructed amplitude distribution of the ideal 600 nm cubic particle. As the dynamic range of the input drops from 10⁸ to 10⁴, the sharpness of the edges and corners decreases dramatically. The edge sharpness is quantified by fitting line cuts across the crystal surfaces. The left panel of figure 6 shows the fitted full-width-at-half-maximum (FWHM) at different detection dynamic ranges. The sharpest edge is obtained when the dynamic range is above 10⁶, which is consistent with the deviation-width analysis of the reconstructed amplitude and phase shown in figure 5. Considering that the majority of Bragg CDI applications are conducted in the hard x-ray regime, a 10⁶ detection dynamic range can be adequately achieved with modern pixel area detectors [43,44].

Figure 5. Left panel: averaged deviation widths of the reconstructed amplitude and phase using the ideal diffraction data from a 600 nm cubic particle at varying detection dynamic range. The oversampling ratio is 28 for this crystal size. The optimum image quality is achieved with a dynamic range above 10⁶. Right panel: central slice from the 3D ideal diffraction data of the 600 nm cubic particle displayed in logarithmic scale at three different dynamic ranges.

Figure 6. Left panel: the sharpness of the reconstructed amplitude with simulated data from the 600 nm cubic particle. The FWHM is estimated by Gaussian fitting of line cuts across edges of the recovered amplitude. The sharpest edge is achieved when the dynamic range is above 10⁶. Right panel: the 50% isosurfaces of the reconstructed amplitudes at three dynamic ranges.

Discussion
In this study, we investigate some of the fundamental parameters of Bragg CDI experiments for their effects on the image quality of the reconstructed amplitude and phase. The list of parameters we consider, however, is by no means exhaustive. One possible additional factor that may affect the reconstruction statistics is the aliasing effect, i.e. the diffraction intensity not decaying completely to zero within the detection area, violating the continuous boundary condition of the Fourier transform. In our framework, the forward modeling process is not a potential source of aliasing, since the intensity data are generated analytically from the Patterson formulation rather than with a fast Fourier transform (FFT) algorithm. However, the phasing algorithm is based on the FFT, so the reconstructed amplitudes and phases may reflect its influence. One general approach to mitigating the aliasing effect is zero-padding the data array [45]. To investigate whether the aliasing effect in our reconstructions is significant, we zero-pad the original 128×128×128 data array to 160×160×160 and reconstruct these arrays following the same procedure described earlier. With the simulated data from a 600 nm cube, when the zero-padded pixels are stringently imposed, the obtained amplitude and phase deviation widths are 0.0089±0.0009 and 0.0035±0.0014, respectively, essentially unchanged from the widths obtained with unpadded data, 0.0077±0.0005 and 0.0048±0.0023. This implies that aliasing does not have a significant impact in this simulation setup. When the zero-padded pixels are allowed to float during the iterative reconstruction process, the amplitude and phase deviation widths increase by roughly one order of magnitude, to 0.043±0.007 and 0.036±0.005, respectively. One plausible explanation for this increase is that the floating zero-padded pixels effectively act as additional unknown variables in the phase-retrieval problem.
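The zero-padding step used in this test (embedding the 128³ array centrally in a 160³ array of zeros) can be written as follows; the helper name is ours:

```python
import numpy as np

def zero_pad(arr, new_shape):
    """Embed a 3D intensity array, centred, in a larger zero-filled array."""
    out = np.zeros(new_shape, dtype=arr.dtype)
    starts = [(n - o) // 2 for n, o in zip(new_shape, arr.shape)]
    slices = tuple(slice(s, s + o) for s, o in zip(starts, arr.shape))
    out[slices] = arr
    return out

# Pad a 128^3 array to 160^3, as in the aliasing test described above.
padded = zero_pad(np.ones((128, 128, 128)), (160, 160, 160))
```

Padding in reciprocal space refines the real-space sampling grid without adding information, which is why the stringently imposed padding leaves the deviation widths essentially unchanged.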
Besides the effects due to the nature of the phase-retrieval process, the diffraction geometry itself may also contribute to the uncertainties in the reconstructed objects. For instance, simulated diffraction data with high numerical apertures are expected to be more vulnerable to the curvature of the Ewald sphere, which requires corrections to the intensities at high spatial frequencies. The deviation between a flat detector and the curved Ewald sphere is expected to add to the uncertainties in the reconstructions, especially for small nanoparticles. Likewise, increasing the numerical aperture results in an underestimation of the simulated intensities at high frequencies relative to their true values, since the Patterson function is strictly accurate only in the close vicinity of the Bragg angle. Similar effects are expected in rocking curve scans with large angular ranges.

Conclusion
In summary, we establish a numerical framework to characterize the performance of the Bragg CDI technique. Rocking-curve 3D diffraction data are simulated through forward modeling with tunable parameters to mimic realistic experimental conditions. The generated data are then reconstructed using a well-established phase-retrieval process to provide real-space images, whose quality is quantitatively evaluated. With ideal diffraction data obtained from a perfectly crystalline cubic particle, the simulation results show that the best reconstruction quality is achieved with an overall oversampling ratio above 30 (equivalent to $30^{1/3} \approx 3.1$ pixels per fringe) using detectors with finite pixel sizes. A minimum dynamic range of 10⁶ is needed in the intensity measurements to reach the ultimate reconstruction performance. Data reduction techniques such as binning carried out on the input data may degrade the obtained image quality. When the conditions for best performance are satisfied, the phase distribution width is about 0.01 radians, which is low compared with the majority of reported experimental results. Nevertheless, this inherent limitation has to be considered when advancing this imaging technique for probing subtle displacement changes, as expected in a variety of in situ studies. The developed simulation framework can effectively run numerical experiments to verify measurement feasibility and rationally optimize measurement conditions, which will significantly improve the throughput of Bragg CDI analyses.