Advanced high dynamic range fluorescence microscopy with Poisson noise modeling and integrated edge-preserving denoising

In recent decades, fluorescence microscopy has evolved into a powerful tool for modern cell biology and immunology. However, while modern fluorescence microscopes make it possible to study processes at the subcellular level, the informative content of the recorded images is frequently constrained by the limited dynamic range of the camera mounted to the optical system. In addition, the quality of acquired images is generally affected by the typically low-light conditions, which lead to comparatively high levels of noise in the data. Addressing these issues, we introduce a variational method for high dynamic range (HDR) imaging in the context of fluorescence microscopy that explicitly accounts for the Poisson statistics of the unavoidable signal-dependent photon shot noise and complements HDR image reconstruction with edge-preserving denoising. Since the proposed model contains a weight function to confine the influence of under- and overexposed pixels on the result, we briefly discuss the choice of this function. We evaluate our approach by showing HDR results for real fluorescence microscopy exposure sequences acquired with the recently developed MACSima™ System for fully automated cyclic immunofluorescence imaging. These results are obtained using a first-order primal-dual implementation. In addition, we provide the corresponding saddle-point and dual formulations of the problem.


Introduction
A considerable number of unsolved problems in biology are related to processes at the cellular or subcellular level. This observation emphasizes the need for imaging methods that are able to resolve objects and structures at such tiny scales. One such technique is fluorescence microscopy: thanks to improvements with regard to image acquisition and analysis methods as well as advances in fluorescent labeling using genetically encoded tags, it is now indeed possible to study many biological processes even at the molecular level (cf e.g. [1][2][3][4][5][6]). Moreover, recent cyclic approaches [7][8][9][10][11][12], which alternate between staining, image acquisition, and erasure of the fluorescence signal, make it possible to combine the in situ information of a plethora of markers on a single sample, thus enabling, for example, insights into tumor microenvironments. In this context, it is noteworthy that one commercial system for cyclic immunofluorescence was recently launched by Miltenyi Biotec. In short, these developments have turned fluorescence microscopy into an indispensable tool for modern cell biology and immunology.
Nevertheless, there are still factors that restrain the informative value of fluorescence images. In particular, the dynamic range of fluorescence emitted by the stained specimen is often larger than the sensor of the camera mounted to the optical system can handle simultaneously. This finds its expression in regions of under- or overexposed pixels, where differences in the fluorescence signal are no longer reflected by different intensity values in the image but are completely hidden in the background noise or appear as areas of pure white, respectively. We thus face a loss of information that hampers, for instance, accurate object detection and segmentation, which in turn constitutes an essential preparatory step in the context of many cell analysis tasks. As a workaround, modern microscopes frequently offer acquisition of not only a single image but of an entire sequence of co-registered images with varying exposure. For a suitable choice of exposure times, this acquisition protocol makes it possible to capture the existing differences in fluorescence for every part of the sample in at least one image. Yet, it would be even more desirable if one could fuse all images of the exposure sequence to obtain a single consistent representation of the brightness distribution for the whole field of view (FOV). This is precisely the objective of high dynamic range (HDR) imaging [13][14][15][16], which is by now a common image processing technique in conventional digital photography. In that context, a series of algorithms for combining differently exposed pictures into a single image with extended dynamic range were proposed at the end of the last century [17][18][19][20]. All these well-established approaches have in common that they provide an HDR radiance map, where pixel values are proportional to the true radiance in the scene.
Besides the extended dynamic range, it is this latter property of the radiance map in particular that seems attractive for fluorescence microscopy applications, since it not only enables a more exact qualitative analysis of the specimen but also fosters a quantitative evaluation of the data.
To clarify our contribution in this paper, let us point out that the idea of considering HDR imaging also for optical microscopy is not entirely new: indeed, Bell et al [21, 22] and Eastwood and Childs [23] considered HDR imaging for bright field microscopy of stationary and moving specimens, respectively. And as regards fluorescence microscopy, more recently Bal [24] and Vinegoni et al [25, 26] presented some first encouraging HDR results, which seem to confirm our intuition that HDR imaging is also beneficial for this application. However, while in the context of photography HDR imaging has been studied extensively (cf e.g. [13-15, 27] and references therein), its potential for optical microscopy appears far less explored. Specifically, to the best of our knowledge, no attempt has yet been made to explicitly account for the significantly lower light conditions typical of microscopy as compared to conventional photography, which render a more careful treatment of noise necessary. The latter particularly holds for photon shot noise, since, as opposed to dark noise and readout noise, the former cannot be substantially reduced by enhanced camera design but, caused by the particle nature of light, constitutes the unavoidable part of image noise (cf e.g. [4,5]).
Motivated by these considerations, we introduce a novel method for HDR imaging in the context of fluorescence microscopy that explicitly takes the Poisson statistics of the signal-dependent photon shot noise into account and complements HDR image reconstruction with edge-preserving denoising. More precisely, we solve the inverse problem of reconstructing an image that reflects the true range of fluorescence in the FOV from an exposure sequence of noisy microscopy images by taking a variational approach, i.e., by minimizing an appropriately designed energy functional. As usual, the latter is composed of a task-tailored data fidelity term and a suitable regularization, whose impact on the result is controlled by a so-called regularization parameter. Hence, let us now further specify the energy functional that we consider in this paper: adopting the perspective of Bayesian modeling, we derive a weighted Kullback-Leibler divergence-type term to grant each image of the exposure sequence an appropriate level of influence on the searched-for HDR radiance map. As regards the regularization term, we decided on the famous total variation (TV) functional. First proposed as a regularization for image denoising by Rudin, Osher and Fatemi [28] in 1992, this functional has subsequently been considered for a seemingly endless number of different imaging tasks, including image deblurring, inpainting and superresolution, medical image reconstruction, stereo matching and optical flow estimation, to name only a few (cf e.g. [29,30] and references therein). Its popularity is mainly attributable to the TV functional's distinctive feature of promoting piecewise constant solutions with sharp edges, a property that also appears highly desirable in the context of microscopy to ensure low-noise results at the highest possible true resolution.
We would like to point out, however, that the proposed variational method for HDR fluorescence microscopy is straightforwardly extendable to any proper, convex and lower semicontinuous regularization functional.
The remainder of the paper is organized as follows: first, we reconsider the earlier mentioned well-known HDR imaging approaches in the literature, which leads us to the observation that HDR imaging is frequently tackled as a two-step procedure, where the first step consists in estimating the so-called camera response function (CRF). The latter characterizes the typically nonlinear relation between radiance values in the scene and intensity values in acquired images and constitutes an essential ingredient for the computation of an HDR radiance map from an exposure sequence. Subsequently, we clarify the notation used throughout this paper and explain the general problem setting. In section 2.4, we then derive our variational method in detail. Since, similar to existing HDR imaging approaches, the latter, or more precisely its data fidelity term, is endowed with a weight function to account for the limited significance of under- and overexposed pixels, we also discuss various ideas for choosing this function. Numerical results for real fluorescence microscopy exposure sequences acquired with the novel MACSima™ Imaging Platform are presented in section 3. We complete the paper with a brief conclusion and some comments on possible extensions of our approach that might serve as a starting point for future research in this area. Finally, let us point out that this paper is equipped with an appendix, in the first part of which we review the classical HDR imaging approach by Debevec and Malik [18] from a variational perspective. It is also in the appendix that we provide the associated saddle-point and dual formulations of the proposed variational HDR fluorescence microscopy approach and present a possible numerical implementation.

HDR fluorescence microscopy: methodological details
We begin the current section by reconsidering the previously mentioned renowned works addressing HDR imaging for photography [17][18][19][20], since they serve as a natural starting point for our reflections on the generation of an HDR radiance map from an exposure sequence of fluorescence microscopy images. Regarding these papers, it is striking that they all share the idea of employing the following two-step procedure, which we will also adhere to in this paper:
1. Provided that the camera response function (CRF) is not a priori known, estimate this function from a suitable selection of pixels sampled from an exposure sequence.
2. Once an estimate of the CRF is available, use this function to compute a high dynamic range (HDR) radiance map from a set of differently exposed images, commonly referred to as low dynamic range (LDR) images.
On the other hand, the cited works differ in the way each of the problems formulated above is solved. Since the novel approach we present in this paper directly builds upon the work of Debevec and Malik [18], we recall their approach for tackling task one and, in particular, review this method from a variational perspective in appendix A. Note that, for the sake of completeness, we have also included a concise comparison of all approaches proposed in the aforementioned papers [17][18][19][20]. We shall now continue by establishing a number of notations and specifying the general setting.

General setting
Let us first point out that from now on we consider physical quantities with a spatial distribution, as well as the images outputted by the camera, as spatially discrete objects, meaning that we assume that they are defined on a regular Cartesian grid of size N_1 × N_2 with grid step size h. This implies that all these objects are represented by (N_1 × N_2)-matrices, where the (n_1, n_2)-th matrix entry corresponds to the quantity's value within the area [n_1 h, (n_1 + 1)h] × [n_2 h, (n_2 + 1)h]. However, since in the context of HDR imaging one usually deals with exposure sequences rather than with single images, which often requires an additional index to specify which picture of the sequence is currently regarded, another approach seems more convenient to keep equations readable: rather than using matrices, one identifies the aforementioned quantities with N-dimensional vectors, N = N_1 N_2, obtained by raster scanning the matrix entries. Following this line, we henceforth use the small letters n and p to refer to the n-th spatial position in the p-th picture of the exposure sequence.
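To make the raster-scan identification concrete, the following small NumPy sketch (ours, not code from the paper; all sizes are illustrative) stores an exposure sequence of P images of size N1 × N2 as an (N × P)-array indexed by the spatial position n and the picture index p:

```python
import numpy as np

# Illustration of the raster-scan identification used in the text: an
# (N1 x N2)-matrix is identified with an N-dimensional vector, N = N1*N2,
# and an exposure sequence of P images is then conveniently stored as an
# (N x P)-array indexed by (n, p).
N1, N2, P = 4, 3, 2
N = N1 * N2

images = [np.arange(N1 * N2).reshape(N1, N2) + p for p in range(P)]

# Raster scanning: row-major flattening of each (N1 x N2)-matrix.
Z = np.stack([img.ravel(order="C") for img in images], axis=1)  # shape (N, P)

# Z[n, p] is the value at the n-th spatial position in picture p; the
# inverse identification recovers the matrix form when needed.
assert Z.shape == (N, P)
assert np.array_equal(Z[:, 0].reshape(N1, N2), images[0])
```

The row-major order is a choice of this sketch; any fixed raster order works as long as it is used consistently.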
Since, with respect to the images outputted by the camera, another set of importance is the set of possible intensity values 𝕀 ≔ {I_min, ..., I_max}, we shall also briefly comment on this set. In particular, we note that usually I_min = 0, while I_max is related to the bit depth of the images, such that typical values for I_max are 255, 4095 and 65 535, corresponding to 8-, 12- and 16-bit images, respectively. Moreover, we take the chance to mention that we denote the cardinality of 𝕀 by R ≔ I_max − I_min + 1. After these rather general remarks, we shall now turn to objects that are more specifically related to HDR imaging. One key ingredient mentioned in both tasks in our above list is the camera response function (CRF). As mentioned earlier, the CRF is a typically nonlinear function that determines how radiance, or, more precisely, radiant exposure, at each element of the photosensor array of the camera becomes a pixel value in an image. Noticing that the radiant exposure is characterized as the irradiance of a surface integrated over the time of irradiation, this motivates the introduction of the following set of notations:
• f : ℝ_{≥0} → 𝕀 - the camera response function (CRF), with the implicit understanding that f is applied pointwise whenever its input argument is a vector or a matrix,
• Z ∈ 𝕀^N - the camera output (an image corresponding to a certain exposure time),
• H ∈ ℝ_{≥0}^N - the radiant exposure (in Joule per square meter),
• E ∈ ℝ_{≥0}^N - the irradiance (in Watt per square meter), and
• Δt ∈ ℝ_{>0} - the exposure time (in milliseconds).
In the light of the aforesaid, these physical quantities are then related via the identity

Z = f(H), with H = ∫_0^{Δt} E(t) dt. (1)

If we moreover assume that the brightness distribution in the captured scene does not change over the period of exposure, we obtain a much simpler characterization of the radiant exposure. More precisely, in this case the latter corresponds just to the product of irradiance and exposure time, i.e.,

H = Δt E, (2)

a relation known as reciprocity. Note that, in consequence of the above assumption, here and in the following we drop the time-dependency of the irradiance and denote the latter by E rather than by E(t). Then, inserting the above identity in (1), we eventually end up with the following forward problem:

Z = f(Δt E). (3)

While in general the CRF is a nonlinear function, it still seems natural that higher radiant exposure values are mapped to higher intensity values. In the following, we therefore make the standard assumption that f is strictly monotonically increasing, which in particular implies that its inverse function f^{-1} is well-defined. Taking into account that all physical quantities listed before are nonnegative, we note that an alternative formulation of (3) reads

ln(f^{-1}(Z)) = ln(Δt) + ln(E), (4)

where ln denotes the natural logarithm. For the sake of better readability, let us thus introduce the following additional notations:
• g ≔ f^{-1} - the inverse of the CRF, and
• h ≔ ln ∘ f^{-1} - its natural logarithm,
where in both cases we again implicitly assume that g and h, respectively, are applied pointwise whenever the input argument is a vector or a matrix. As a next step, we shall take into account that in this paper we are actually interested in situations where we have exposure sequences rather than a single camera output. More precisely, we assume that we are given a sequence of, say, P pictures Z_1, ..., Z_P, whose content differs only with respect to the images' exposure times Δt_1, ..., Δt_P, the latter being assumed to also be given.
On the additional assumption that irradiance changes between the different images of the exposure sequence are negligible, (4) holds for every image of the sequence, and we thus have, for all n ∈ {1, ..., N} and p ∈ {1, ..., P},

h(Z_{n,p}) = ln((Δt)_p) + ln(E_n),
g(Z_{n,p}) = (Δt)_p E_n. (5)

We thus have everything at hand to turn our attention to task one.
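The forward problem (3) can be simulated directly, which is useful for testing any of the reconstruction steps below. In the following sketch, the gamma-type CRF f, the saturation exposure H_max and all numerical values are illustrative assumptions of ours, not quantities from the paper; photon shot noise is anticipated by drawing the exposure from a Poisson distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

I_max = 255.0     # 8-bit intensity range, for illustration
H_max = 1e4       # assumed saturation exposure (photon count)

def f(H):
    """Hypothetical strictly increasing CRF (gamma-type, illustrative only)."""
    return np.clip(I_max * (H / H_max) ** 0.45, 0.0, I_max)

E = rng.uniform(1.0, 50.0, size=100)   # synthetic irradiance (photons per ms)
dts = np.array([5.0, 20.0, 80.0])      # exposure times in ms

# Reciprocity H = dt * E, with Poisson photon shot noise on the exposure:
Z = np.stack([f(rng.poisson(dt * E)) for dt in dts], axis=1)   # shape (N, P)
```

Since f is monotone, longer exposures produce brighter pixels on average, up to saturation.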

Task one: CRF estimation
In the situation where we want to estimate the CRF from a given exposure sequence, we take a sample of, say, S pixels, S ≪ N, from each picture of the sequence and collect the corresponding intensity values in an (S × P)-matrix Z_sample, such that the (s, p)-th matrix entry corresponds to the intensity value of the s-th selected pixel position in the p-th picture. For s ∈ {1, ..., S} and p ∈ {1, ..., P}, we thus obtain the relation

h((Z_sample)_{s,p}) = ln((Δt)_p) + ln(E_s), (6)

where Z_sample and Δt = (Δt_1, ..., Δt_P) are given and we would like to obtain estimates for the irradiance E at the S chosen pixel positions and, in particular, an estimate for the camera response function f = (exp ∘ h)^{-1}. With regard to (6), it is then clear that a good estimate of the camera response should minimize the distance (in a certain sense) between the two sides of the equation. One simple method to estimate the CRF that is based on this observation, and that we will always use in this paper, is the by now classical approach by Debevec and Malik [18], which, as we explain in appendix A, amounts to solving the optimization problem

min_{h, ln(E)} Σ_{s=1}^{S} Σ_{p=1}^{P} [ w_DM((Z_sample)_{s,p}) ( h((Z_sample)_{s,p}) − ln(E_s) − ln((Δt)_p) ) ]² + α Σ_{z=I_min+1}^{I_max−1} [ w_DM(z) h″(z) ]², (7)

with the hat-shaped weight function

w_DM(z) ≔ (z − I_min) χ_{{I_min, ..., ⌊(I_min+I_max)/2⌋}}(z) + (I_max − z) χ_{{⌊(I_min+I_max)/2⌋+1, ..., I_max}}(z), (8)

χ_A denoting the characteristic function of the set A, and α suitably chosen. We thus proceed to task number two.
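The Debevec-Malik estimation amounts to one linear least-squares solve. The sketch below is our own reconstruction of their linear system (function names, the scale-fixing row and the default alpha are our choices), not the implementation used for the experiments in this paper:

```python
import numpy as np

def w_dm(z, zmin=0, zmax=255):
    """Hat-shaped weight function of Debevec and Malik (cf. (8))."""
    mid = 0.5 * (zmin + zmax)
    return np.where(z <= mid, z - zmin, zmax - z).astype(float)

def estimate_crf(Z_sample, dts, alpha=100.0, zmin=0, zmax=255):
    """Least-squares estimate of h = ln(f^{-1}) from an (S x P) pixel sample.
    Returns h tabulated on {zmin, ..., zmax} and the log-irradiance estimates
    at the S sample positions."""
    S, P = Z_sample.shape
    R = zmax - zmin + 1
    A = np.zeros((S * P + (R - 2) + 1, R + S))   # unknowns: h-values, ln(E_s)
    b = np.zeros(A.shape[0])
    k = 0
    for s in range(S):                 # data equations: w (h(z) - lnE_s) = w ln(dt_p)
        for p in range(P):
            z = int(Z_sample[s, p])
            wij = w_dm(z, zmin, zmax)
            A[k, z - zmin] = wij
            A[k, R + s] = -wij
            b[k] = wij * np.log(dts[p])
            k += 1
    A[k, (zmax + zmin) // 2 - zmin] = 1.0        # fix the scale: h(mid) = 0
    k += 1
    for z in range(zmin + 1, zmax):              # smoothness via 2nd differences
        wz = alpha * w_dm(z, zmin, zmax)
        A[k, z - zmin - 1: z - zmin + 2] = wz * np.array([1.0, -2.0, 1.0])
        k += 1
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:R], x[R:]
```

With exact, noise-free data generated by a linear CRF, the recovered h is (up to the fixed additive constant) the natural logarithm, i.e., strictly increasing over the sampled intensity range.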

Task two: HDR radiance map computation-the standard approach
The most prevalent idea in the literature (cf e.g. [17-19, 22, 26]) addressing the problem of computing an HDR radiance map from a given exposure sequence of LDR images, where it is now assumed that an estimate of the CRF (respectively its inverse or the natural logarithm thereof) is available, can be summarized as follows: one starts from one of the expressions in (5) and rearranges such that the pointwise irradiance, respectively its natural logarithm, stands isolated on the left-hand side of the identity. To take full advantage of all available information, one then computes the weighted average of the right-hand side of that identity over all pictures of the exposure sequence and identifies the pointwise irradiance with this expression. Assuming that the now given sequence consists of P′ LDR images, we thus obtain (cf [17-19, 22])

E_n = Σ_{p=1}^{P′} w(Z_{n,p}) ( g(Z_{n,p}) / (Δt)_p ) / Σ_{p=1}^{P′} w(Z_{n,p}), (9)

respectively

ln(E_n) = Σ_{p=1}^{P′} w(Z_{n,p}) ( h(Z_{n,p}) − ln((Δt)_p) ) / Σ_{p=1}^{P′} w(Z_{n,p}), (10)

where w : 𝕀 → [0, 1] is again a suitable weight function whose choice varies between the different cited approaches. Indeed, while Debevec and Malik [18] proposed to simply use the same hat function as for the estimation of the camera response (8), which we from now on denote by w_DM, Mann and Picard [17] and Mitsunaga and Nayar [19] suggested to choose the weight function in dependence of the given CRF. More precisely, claiming that the sensor output is expected to be most reliable where the CRF is particularly sensitive to radiance changes and least reliable where the CRF is flat, Mann and Picard [17] proposed to use the derivative of the given CRF as an indicator for the influence of each value, i.e.,

w_MP(z) ≔ f′(f^{-1}(z)). (11)

Mitsunaga and Nayar [19] went even a bit further, as they argued that not only the slope of the CRF but also the signal-to-noise ratio should be taken into account to assess the reliability of each pixel.
Assuming the noise in the given pictures to be intensity-independent, Mitsunaga and Nayar [19] thus suggested to determine the weight for each measurement according to

w_MN(z) ≔ g(z) / g′(z). (12)

Finally, we shall mention that the trivial weight function

w_uniform(z) ≔ 1 (13)

of course also constitutes a feasible choice of w in (9) and (10), respectively, in which case the HDR radiance map at each pixel position is characterized as the plain average of the irradiance, in terms of one of the expressions in (5), over all pictures of the exposure sequence. We will come back to this list of weight functions in the further course of this paper. Based on the information collected so far, we shall now introduce our novel approach, which serves as an alternative to the computation of HDR radiance maps by means of (9) and (10), respectively, and seems particularly suited for exposure sequences acquired in low-light situations such as fluorescence microscopy.
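The log-domain weighted-average merge (10) can be sketched in a few lines. In this sketch (ours), h and w are assumed to be given as lookup tables over the integer intensity range:

```python
import numpy as np

def merge_hdr_log(Z, dts, h, w):
    """Standard weighted-average HDR merge in the log domain (cf. (10)):
    ln(E_n) = sum_p w(Z_np) (h(Z_np) - ln(dt_p)) / sum_p w(Z_np).
    Z: (N, P) integer images, dts: P exposure times,
    h: lookup table for ln(f^{-1}), w: weight lookup table."""
    num = np.zeros(Z.shape[0])
    den = np.zeros(Z.shape[0])
    for p, dt in enumerate(dts):
        wz = w[Z[:, p]]
        num += wz * (h[Z[:, p]] - np.log(dt))
        den += wz
    den = np.where(den > 0, den, 1.0)  # guard pixels with zero total weight
    return np.exp(num / den)
```

For noise-free data generated by a linear CRF, the merge reproduces the irradiance exactly, regardless of the weight function.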

Task two: the proposed variational HDR fluorescence microscopy approach
In the previous paragraph, we have recalled what seems to be the standard approach for computing an HDR radiance map from a sequence of P′ registered LDR images with varying exposure, provided that an estimate of the camera response is available. In particular, we have seen that this standard method largely relies on one of the equivalent characterizations of the irradiance in (5) and thus widely ignores the presence of noise in the given images. Concerning the computation of the irradiance by means of (9) and (10), respectively, only the averaging over all P′ pictures of the given exposure sequence appears as an attempt to increase the robustness against random intensity oscillations in the data. However, the extent to which the averaging really reduces the level of noise at a certain pixel of the HDR radiance map critically depends on the number of pictures in the given sequence that are neither under- nor overexposed at this position (cf also [21]). For low-light applications such as fluorescence microscopy this number is unfortunately rather low. For one thing, exposure sequences in fluorescence microscopy commonly consist of a very small overall number of images, typically two or three, to minimize photobleaching. For another, a characteristic feature of low-light images is that typically only a small portion of pixels is well-exposed, while large areas of such images are dark, or, in other words, underexposed. With regard to fluorescence microscopy, the weighted average is thus expected to have only a minor denoising effect, which seems particularly problematic, since dark image areas are especially prone to noise. For these reasons, we conclude that the standard approach in (9) and (10), respectively, seems suboptimal for the setting we are interested in. As a consequence, we shall now introduce an alternative approach that enables a more explicit treatment of noise in the data.
To derive our novel method for the reconstruction of fluorescence microscopy images of extended dynamic range, we take a variational approach, i.e., we characterize the desired HDR radiance map as the minimizer of a suitable energy functional of the general form

Ê ∈ argmin_E { D(E; Z, f) + α R(E) }. (14)

The first term on the right-hand side measures, in a certain sense, compliance with the given data, in our case a given exposure sequence Z = (Z_1, ..., Z_{P′}) and the CRF f, and is therefore commonly referred to as the data fidelity term. The second term is the so-called regularization term, which penalizes deviations from a priori information about favorable properties of the desired estimate. The last key component is the regularization parameter α, which allows to balance the effects of the data fidelity and the regularization term, the specific forms of which we shall concretize in the following.
Similarly as for the derivation of the standard approach in the previous section, we start from one of the identities in (5), precisely from the formulation in the second line. However, differently than before, we now take into account that measurements are always subject to different types of noise, in the case of fluorescence microscopy photon shot noise, dark noise and readout noise. While the latter two can be minimized by sensor cooling and advanced camera design, the photon shot noise, which is caused by the particle nature of the incident light, constitutes the unavoidable part of the noise (cf also, e.g., [31]). On the assumption that the fluorescence microscope is equipped with a high-quality, carefully tuned camera, it thus seems justified to claim that in our context the counting of photons is the dominant source of the noise (cf also, e.g., [32]). Taking into account that the random arrival of photons at each element of the photosensor array can be modeled by a Poisson process whose expected value and variance depend on the total number of photons, a more accurate variant of the second line of (5) reads

g(Z_{n,p}) = 𝒵_{n,p}, (15)

with 𝒵_{n,p} denoting a realization of a Poi((Δt)_p E_n)-distributed random variable. Referring to ideas from statistical modeling, it thus seems expedient to characterize the unknown radiance map E ∈ ℝ_{≥0}^N as a maximum a-posteriori probability (MAP) estimate of the conditional probability ℙ(E | g(Z)), i.e.,

Ê ∈ argmax_E ℙ(E | g(Z)) = argmax_E ℙ(g(Z) | E) ℙ(E) / ℙ(g(Z)), (16)

where the latter identity is a direct consequence of Bayes' formula. Now it is clear that Ê can equivalently be characterized as the minimizer of the negative natural logarithm of the above probability.
Hence, if we additionally factor in that a standard approach for the choice of ℙ(E) assumes that the latter is a Markov random field or a Gibbs random field, and that the denominator in (16) is independent of E, we obtain the following characterization of the desired HDR radiance map:

Ê ∈ argmin_E { −ln(ℙ(g(Z) | E)) + α R(E) }. (17)

For further details on the derivation of this statistical problem formulation we would like to refer e.g. to [33], section 2.2, [34] and [35], section 4.2, and references therein. Given the statistical characterization of Ê in (17), the most reasonable approach for choosing the data fidelity term of our variational method is to identify the latter with the so-called negative logarithmic likelihood function −ln(ℙ(g(Z) | E)). Accordingly, we now set

D(E) ≔ −ln(ℙ(g(Z) | E)). (18)

If we then take into account that, by means of our noise model in (15), g(Z_{n,p}) is identified with a realization of a Poi((Δt)_p E_n)-distributed random variable, we can specify the likelihood ℙ(g(Z) | E) as follows:

ℙ(g(Z) | E) = Π_{n=1}^{N} Π_{p=1}^{P′} ( ((Δt)_p E_n)^{g(Z_{n,p})} e^{−(Δt)_p E_n} ) / g(Z_{n,p})!, (19)

where we used the standard assumption that the random variables are independently distributed for all n ∈ {1, ..., N} and p ∈ {1, ..., P′}. Inserting (19) in (18) and additionally factoring in that the denominator of the expression in (19) is independent of E and can thus be neglected, we obtain the following expression, which we, slightly abusing notation, again denote by D:

D(E) = Σ_{n=1}^{N} Σ_{p=1}^{P′} ( (Δt)_p E_n − g(Z_{n,p}) ln((Δt)_p E_n) ).

Following the line of argument in existing literature (i.a. [17][18][19][20]), this approach can be further enhanced by incorporating a weight function w : 𝕀 → [0, 1] that grants pixels providing more reliable information a bigger influence on the result. The final data fidelity term that we propose in this paper thus reads

D(E) ≔ Σ_{n=1}^{N} Σ_{p=1}^{P′} w(Z_{n,p}) ( (Δt)_p E_n − g(Z_{n,p}) ln((Δt)_p E_n) ). (20)
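As a quick sanity check of the weighted Poisson data fidelity (20): without regularization the functional decouples over pixels, and setting the derivative with respect to E_n to zero yields the closed-form minimizer E_n* = Σ_p w_p c_p / Σ_p w_p (Δt)_p with c_p = g(Z_{n,p}), i.e., a weighted-average estimate consistent in spirit with the standard merge (9). The following sketch, with made-up numbers, verifies this numerically:

```python
import numpy as np

# Single-pixel data fidelity D(E) = sum_p w_p [ dt_p E - c_p ln(dt_p E) ],
# with illustrative weights, exposure times and photon counts c_p = g(Z_np).
w = np.array([0.2, 1.0, 0.6])
dt = np.array([5.0, 20.0, 80.0])
c = np.array([60.0, 270.0, 1000.0])

def D(E):
    return np.sum(w * (dt * E - c * np.log(dt * E)))

# Closed-form minimizer from d/dE D(E) = sum_p w_p (dt_p - c_p / E) = 0:
E_star = np.sum(w * c) / np.sum(w * dt)

# Numerical minimization over a fine grid confirms the closed form:
grid = np.linspace(0.5 * E_star, 2.0 * E_star, 10001)
E_num = grid[np.argmin([D(E) for E in grid])]
assert abs(E_num - E_star) < 1e-2 * E_star
```

With the TV term added, this decoupling is lost, which is why an iterative scheme such as the primal-dual method of the appendix is needed for the full model.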
Regarding the choice of the weight function, let us first point out that the trivial weight function w_uniform given in (13), the hat function w_DM as of (8) suggested by Debevec and Malik [18], and the CRF-based weight function w_MP given in (11) introduced by Mann and Picard [17] remain feasible choices for w, since the argumentation for each of these options is independent of the specific noise model. However, as concerns the weight function w_MN in (12), first proposed by Mitsunaga and Nayar [19], this no longer applies. While the central idea that the weight for each pixel should depend on both the CRF's sensitivity to intensity changes and the signal-to-noise ratio still seems plausible, Mitsunaga and Nayar's assumption that the noise is signal-independent does not fit our Poisson noise model, whose expected value and variance depend on the total number of photons. Yet, if we reconsider their derivation of the weight function in [19], an intermediate step suggests a weight function of the general form

w(z) = g(z) / ( σ_noise(z) g′(z) ),

where σ_noise denotes the standard deviation of the measurement noise. Taking into account that the variance of a Poisson-distributed random variable is just equal to its parameter, which by definition of the CRF can be expressed by means of its inverse g, we obtain the following Poisson noise adapted variant of Mitsunaga and Nayar's weight function:

w(z) = √(g(z)) / g′(z). (21)

From our experience, depending on the given estimate of the CRF, it might happen that this weight function exhibits strong oscillations, in particular towards the lower bound of the intensity range. To avoid this inconsistent behavior, we adjust the weight function once more by filtering the expression in (21), such that we finally obtain the following smoothed Poisson noise adapted variant of Mitsunaga and Nayar's weight function:

w(z) = ( G_σ ∗ ( √(g(·)) / g′(·) ) )(z), (22)

where G_σ denotes a 1D Gaussian filter with a suitably chosen standard deviation σ.
Note that in order to guarantee that the weight function really maps to [0, 1], we in a final step always normalize the function by dividing by its maximum.
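Our reading of (21) and (22), including the final normalization to [0, 1], can be sketched as follows. The inverse CRF g is assumed to be tabulated on the full intensity range; the finite-difference derivative and the truncated Gaussian kernel are implementation choices of this sketch:

```python
import numpy as np

def poisson_weight(g, sigma=5.0):
    """Smoothed Poisson-adapted weight: w proportional to
    G_sigma * ( sqrt(g(z)) / g'(z) ), normalized so that max(w) = 1.
    g: inverse CRF tabulated on the intensity range {0, ..., R-1}."""
    gp = np.gradient(g)                              # finite-difference g'
    raw = np.sqrt(np.maximum(g, 0.0)) / np.maximum(gp, 1e-12)
    # 1D Gaussian filtering, kernel truncated at 3*sigma, reflective boundary:
    rad = int(3 * sigma)
    x = np.arange(-rad, rad + 1)
    ker = np.exp(-0.5 * (x / sigma) ** 2)
    ker /= ker.sum()
    padded = np.pad(raw, rad, mode="reflect")
    smooth = np.convolve(padded, ker, mode="valid")
    return smooth / smooth.max()                     # normalize to [0, 1]
```

For a linear inverse CRF the raw weight is simply the square root of the intensity, so the smoothed weight increases monotonically apart from boundary effects.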
Having introduced our task-tailored weighted data fidelity term, we now turn to the second term of our energy functional in (14) and (17), respectively. As stated before, this term allows us to incorporate prior knowledge about desirable properties of the searched-for estimate into the optimization problem. In our context, we would like the regularization term to further contribute to the reduction of noise in the resulting HDR radiance map, in particular in the large dark background areas. Moreover, the latter should be accomplished without blurring the result, since in microscopy a central quality criterion is the real resolution, defined as the shortest distance between two points on a specimen that can still be distinguished in acquired images. Given these requirements and our academic background, it seems natural to choose the regularization term to be the total variation (TV) functional, which was first proposed as a regularizer for image denoising by Rudin, Osher and Fatemi [28] in 1992 and, due to its distinctive feature of promoting piecewise constant solutions with sharp edges, has subsequently been successfully applied to a whole zoo of different problems in imaging (cf e.g. [29,30] and references therein). In our discrete setting, the term total variation refers to the following family of equivalent seminorms:

TV_r(E) ≔ Σ_{n=1}^{N} |∇E_n|_r, (23)

with ∇E denoting the discrete image gradient and |·|_r being a suitably chosen, pointwise applied vector norm, where in the context of imaging r = 1 and r = 2 are most common. For r = 1, (23) is the so-called anisotropic TV, since in this case edges that are parallel to the coordinate axes are favored over edges in other directions. For r = 2, (23) is commonly referred to as isotropic TV, as all directions in the image are treated in the same way, which finds its expression in rounded corners.
Taking into account that in fluorescence microscopy our main interest is the study of biological samples, we rather expect objects in the HDR radiance map to be of roundish shape than to feature many right-angled corners. Hence, we conclude that in our setting the isotropic variant of TV appears as the more promising approach, and we propose to use the following regularization term:

R(E) ≔ TV_2(E) = Σ_{n=1}^{N} √( (∇_1 E_n)² + (∇_2 E_n)² ), (24)

where ∇_1 E_n and ∇_2 E_n denote the image gradient at the n-th pixel in horizontal and vertical direction, respectively. For further information on the TV functional and its properties we refer the interested reader e.g. to [36], section 6.3, [35], section 4.4, [29] and [37], chapter 23, as well as references therein.
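The isotropic TV of a raster-scanned image can be evaluated as follows. In this sketch (ours), the gradient is discretized with forward differences and replicated boundary values; the discrete gradient can of course be defined in other ways:

```python
import numpy as np

def tv_iso(E, N1, N2):
    """Isotropic total variation of the raster-scanned image E, using
    forward differences with replicated (Neumann) boundary handling."""
    img = E.reshape(N1, N2)
    d1 = np.diff(img, axis=0, append=img[-1:, :])   # vertical differences
    d2 = np.diff(img, axis=1, append=img[:, -1:])   # horizontal differences
    return np.sum(np.sqrt(d1 ** 2 + d2 ** 2))
```

As expected, the value is zero for constant images and equals the jump height times the edge length for a single straight step edge.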
To sum up, our variational approach for the reconstruction of HDR radiance maps from fluorescence microscopy exposure sequences, which explicitly accounts for the Poisson statistics of the unavoidable signal-dependent photon shot noise and complements HDR image reconstruction with edge-preserving denoising, reads

Ê ∈ argmin_{E ≥ 0} Σ_{n=1}^{N} Σ_{p=1}^{P′} w(Z_{n,p}) ( (Δt)_p E_n − g(Z_{n,p}) ln((Δt)_p E_n) ) + α TV_2(E). (25)

We conclude the current paragraph by pointing out that we also provide the saddle-point and dual formulations of the above optimization problem in appendix B. Moreover, please note that details on the numerical implementation can be found in appendix C.
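For orientation, a minimal end-to-end sketch of the proposed model is given below. It is NOT the first-order primal-dual scheme of the appendix: for simplicity we minimize a smoothed-TV surrogate with a generic quasi-Newton solver on a tiny synthetic example, and the linear CRF (g taken as the identity), uniform weights and all parameter values are illustrative assumptions of ours:

```python
import numpy as np
from scipy.optimize import minimize

# Weighted Poisson (Kullback-Leibler-type) data fidelity plus isotropic TV,
# solved here as a smoothed-TV surrogate with L-BFGS-B (illustration only).
rng = np.random.default_rng(1)
N1 = N2 = 16
E_true = np.ones((N1, N2))
E_true[4:12, 4:12] = 8.0                       # piecewise constant scene
dts = np.array([5.0, 20.0, 80.0])              # exposure times
counts = np.stack([rng.poisson(dt * E_true) for dt in dts])  # photon counts g(Z_p)
alpha, eps = 2.0, 1e-6                         # reg. parameter, TV smoothing

def energy(x):
    E = x.reshape(N1, N2)
    data = sum(np.sum(dt * E - c * np.log(dt * E))
               for dt, c in zip(dts, counts))  # data term with weights w = 1
    d1 = np.diff(E, axis=0, append=E[-1:, :])
    d2 = np.diff(E, axis=1, append=E[:, -1:])
    return data + alpha * np.sum(np.sqrt(d1 ** 2 + d2 ** 2 + eps))

res = minimize(energy, np.full(N1 * N2, 2.0), method="L-BFGS-B",
               bounds=[(1e-6, None)] * (N1 * N2))
E_hat = res.x.reshape(N1, N2)                  # reconstructed radiance map
```

On this toy example, the reconstruction recovers both intensity levels of the piecewise constant scene; for real data and image sizes, the primal-dual scheme of the appendix is the appropriate solver.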

HDR fluorescence microscopy: numerical results for real data
Having explained our HDR reconstruction approach, we shall now provide some numerical results for experimental fluorescence microscopy data acquired with the novel MACSima TM Imaging Platform. As is common practice, we first provide detailed information on the biological sample preparation and the acquisition of the fluorescence exposure sequences, then we show and discuss the corresponding numerical results.

Biological sample preparation and data acquisition
The biological sample preparation was conducted as follows: 3 μm formalin-fixed paraffin-embedded (FFPE) sections of human tonsil samples (courtesy of the Institute for Pathology, University of Cologne) mounted on 170 μm thick glass plates were dewaxed, rehydrated and subjected to antigen retrieval. Lastly, a 24-well frame was attached to the glass plate. For the detection of cell nuclei and ROI (region of interest) definition, a DAPI pre-staining was performed prior to the multiparameter immunofluorescence-based imaging on the MACSima™ Imaging Platform. Generally speaking, the MACSima™ Imaging Cyclic Staining (MICS) technology consists of the following automated steps: immunofluorescent staining using a certain biomarker panel, sample washing, multi-field imaging, and signal erasure by photobleaching. Up to three fluorochrome-conjugated antibodies (PE, FITC, APC) can be applied per cycle, which reduces the overall number of MICS cycles and thereby allows the application of hundreds of binders to a single specimen. The data used in this paper (cf figure 2) were derived from a bi-color MICS experiment (PE- and FITC-conjugates were used for staining and imaging in the same cycle) with the biomarker panel including the biomarkers listed in table 1 and the entire field of view consisting of nine (3 × 3) fields, as can be seen in figure 1. The images were obtained in an epifluorescence setup with a 20× objective (NA 0.75, 170 μm coverslip glass) using the following exposure times: 7.0 ms for DAPI; 8.0, 32.0 and 128.0 ms for FITC-conjugates; and 5.0, 20.0 and 80.0 ms for PE-conjugates. Note that the three different exposure times per fluorochrome were specifically optimized for the MACSima™ Imaging Platform and the required disposables to create high dynamic range (HDR) radiance maps, not single-shot images.
For the sake of completeness, we shall also mention that autofocusing was achieved by a hardware autofocus measurement. Remaining fluorescence signals after photobleaching were imaged separately (bleach cycles) using the same exposure times. The bleaching was initially performed for 10 min to erase the autofluorescence signal (000_cycle) and for 2 min in all subsequent staining cycles (001_cycle to 011_cycle). Finally, please note that after the described image acquisition procedure an image processing pipeline comprising the following steps was applied: image registration, sub-image extraction (4608 × 2560 pixels), hot pixel correction, bleach image correction (the bleach image from the previous bleach cycle is subtracted from the stain image of the subsequent cycle), and 2× binning (2304 × 1280 pixels).

CRF estimation and HDR radiance map reconstruction results
Following our discussion in the previous sections of this paper, we shall first use the exposure sequences shown in figure 2 to estimate the corresponding camera response function (CRF). In this context, it is worth highlighting that two different fluorochromes, namely FITC and PE, have been used to stain the specimen and acquire these images. While the former emits light at wavelengths of about 440-660 nm with a peak at 519 nm, giving it a green color, the latter has an emission spectrum of about 537-685 nm with a peak at 573 nm, corresponding to an orange-yellow color of light. Taking into account that the irradiance E per unit time amounts to the integral over all wavelengths of the spectral irradiance E_λ, i.e.,

E = ∫ E_λ dλ,

it is evident that the irradiance is linked to the wavelength range of the incident light. Furthermore, the sensitivity of the camera chip is known to be wavelength-dependent. For RGB cameras, it is thus common practice to estimate the camera response function for each color channel separately (cf e.g. [18, 19]). Following this line, we decided to also compute individual CRF estimates for the FITC and the PE data. Accordingly, we created two matrices Z_sample_FITC and Z_sample_PE by sampling from all nine fields of the FOV (cf figure 1) for the CD45RA FITC and CD8 PE biomarker, respectively. 4 Then, we solved the classical Debevec-Malik model introduced in [18] and stated in (7) or, alternatively, (A5) and (A6), respectively, for these data matrices, where we chose the regularization parameter α just large enough to enforce a smooth and strictly monotonically increasing course of the CRF. The thus obtained estimates for the natural logarithm of the inverse CRF, based on 2,949,120 uniformly sampled pixels, that is one-ninth of the pixels in the entire FOV, are depicted in figure 3.
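For readers who wish to reproduce the CRF estimation step, the following is a minimal sketch of the Debevec-Malik least-squares system [18], written here for 8-bit data for brevity (the experiments above use 16-bit images). The function name `gsolve` follows the original paper; the strictly positive hat-type weight inside is an illustrative stand-in, not the exact weight used in our experiments.

```python
import numpy as np

def gsolve(Z, log_dt, alpha, n_levels=256):
    """Estimate the log inverse CRF h from an exposure sequence.

    Z      : (P, J) integer intensity samples, P pixels, J exposures
    log_dt : (J,) natural logarithms of the exposure times
    alpha  : smoothness regularization parameter
    Returns h sampled at every intensity level 0..n_levels-1.
    """
    def w(z):  # hat-type weight, kept strictly positive for stability
        mid = 0.5 * (n_levels - 1)
        return (z + 1.0) if z <= mid else float(n_levels - z)

    P, J = Z.shape
    A = np.zeros((P * J + n_levels - 1, n_levels + P))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(P):            # data-fitting rows: h(Z_ij) - lnE_i = ln dt_j
        for j in range(J):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij
            A[k, n_levels + i] = -wij
            b[k] = wij * log_dt[j]
            k += 1
    A[k, n_levels // 2] = 1.0     # pin h at mid-gray to remove the
    k += 1                        # additive ambiguity
    for z in range(1, n_levels - 1):   # smoothness rows: second differences
        wz = alpha * w(z)
        A[k, z - 1], A[k, z], A[k, z + 1] = wz, -2.0 * wz, wz
        k += 1
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:n_levels]           # trailing entries of x are the lnE_i
```

On synthetic data from a linear camera, the recovered h follows the expected logarithmic course up to the fixed additive constant.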
Looking at these function plots, we see that these estimates meet our assumptions insofar as both are smooth, strictly monotonically increasing, nonlinear and satisfy the side constraint (A3). Moreover, we note that the estimates for the FITC and the PE fluorochrome are reasonably similar, yet recognizably different, which is also in line with our expectations. In the next step, we therefore used the thus obtained CRF estimates to calculate HDR radiance maps from the exposure sequences depicted in figure 2 in order to illustrate the performance of the approach proposed in this paper for fluorescence microscopy data.
In order to create a standard of comparison, we first applied the classical approach stated in (9) to the CD45RA FITC and CD8 PE 16-bit LDR images shown in figure 2, where we chose the weight function according to Debevec and Malik's renowned paper [18] (cf the left image of figure 4). Subsequently, we computed HDR radiance maps for exactly the same data by means of the proposed variational approach as given in (25).
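The classical per-pixel weighted-average merge in (9), used here as the standard of comparison, can be sketched as follows; the function name `merge_hdr` and the exact array layout are our illustrative choices.

```python
import numpy as np

def merge_hdr(Z, h, log_dt, w):
    """Classical HDR merge in the sense of Debevec-Malik [18]:
    a per-pixel weighted average of h(Z_ij) - ln(dt_j) over the exposures.

    Z      : (J, H, W) stack of integer-valued LDR images
    h      : (n_levels,) estimated log inverse CRF
    log_dt : (J,) natural logarithms of the exposure times
    w      : (n_levels,) weight per intensity level
    Returns the log radiance map ln E as an (H, W) array.
    """
    W_ = w[Z]                                        # per-pixel weights
    num = np.sum(W_ * (h[Z] - log_dt[:, None, None]), axis=0)
    den = np.sum(W_, axis=0)
    den[den == 0] = 1e-12                            # guard fully clipped pixels
    return num / den
```

For a linear camera, a pixel recorded as 49 at unit exposure and 99 at doubled exposure merges to the same log radiance from both shots, as one would expect.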
Since the motivation for conceiving this novel HDR radiance map reconstruction method was to account for noise, or more precisely the unavoidable signal-dependent photon shot noise, in the data, we chose the weight function in (25) according to (22) with σ = 0.05. Plots of the respective weight functions for the CD45RA and CD8 biomarker, which are based on the CRF estimates for the FITC and the PE fluorochromes, respectively, are provided in the middle and the right image of figure 4. In an attempt to highlight the effects of the derived Kullback-Leibler divergence-type data term and the total variation regularization term, we ran two variants of the algorithm: first, we chose α = 10⁻¹⁶ in order to restrict the influence of the regularization term to a minimum; then we chose the regularization parameter α such that we obtained the visually most convincing denoised HDR radiance map. For both the CD45RA FITC and the CD8 PE data, this was the case for α = 2.5, which illustrates the robustness of our approach. The resulting HDR radiance maps obtained from the CD45RA FITC exposure sequence in the first row of figure 2 are shown in the second, fourth and sixth row of figures 5 and 6, respectively; the ones for the CD8 PE images in the second row of figure 2 can be found in the second, fourth and sixth row of figures 7 and 8, respectively. All these results contain a green-framed rectangle that marks a region of interest, close-ups of which are provided in the respective next row of the same figure. Please note that the subtle differences in the results can best be viewed on a high-definition digital monitor. Finally, we would like to point out that for visualization purposes all HDR results in this paper are depicted on a logarithmic scale.
If we thus take a look at the images shown in figures 5 to 8, we first of all recognize that all HDR radiance maps seem to match the given data and, generally speaking, to resemble each other, even though the reconstructions obtained by means of our approach (middle and right) appear slightly brighter than the standard result (left). Since the latter observation holds for both choices of the regularization parameter, it appears reasonable to attribute this effect to the Kullback-Leibler divergence-type data term in our variational model. In light of the lack of a ground truth, it unfortunately cannot be definitively judged which approach gives the more accurate reconstruction of the true radiance in the scene, and it thus seems only safe to conclude that the proposed variational reconstruction technique can compete with Debevec and Malik's [18] well-known method. Hence, let us proceed to the detail views in rows three to six of figures 5 to 8, respectively. To start with, these close-ups substantiate that all HDR results combine valuable information from the corresponding entire exposure sequence into a single image. Indeed, looking at figures 5 and 7 and comparing the images in rows four and six against the images in rows three and five, respectively, we see that the HDR radiance maps provide information that is only contained in the two images with shorter exposure but not in the image with the longest exposure time. More precisely, with regard to figure 5, we see that there is a big cell in the center of the region of interest (ROI) that is totally overexposed in the brightest image of the exposure sequence. However, just as in the two images corresponding to the shorter exposure times, the boundary as well as the brightness distribution of this cell are nicely visible in all three HDR reconstructions. Referring to figure 7, we can make a very similar observation.
Again, there is a cell in the center of the ROI that appears as an area of pure white in the image with the longest exposure time. Looking closely at the two images with shorter exposure, we can see, however, that there is a dark spot in the upper half of this cell that is also discernible in all HDR reconstructions in rows four and six, respectively. Vice versa, if we look at figures 6 and 8 and compare the images in rows four and six with the image crops of the exposure sequence in rows three and five, respectively, we realize that the HDR radiance maps at the same time also provide information that was only captured in the longest exposed image of the sequence. Again, this observation applies to both the CD45RA FITC and the CD8 PE case: all HDR reconstructions in rows four and six of figure 6 reveal a snake-like structure in the dark area in the lower half of the ROI that could only be identified on close inspection of the brightest image in rows three and five, respectively, and the same holds true for the cell cluster in the center of the ROI in figure 8. With regard to the latter ROI, it is also worth mentioning that the HDR reconstructions in fact reveal additional cell structures in the background that were not even discernible in the brightest image of the exposure sequence. We would like to end our discussion of these results for experimental fluorescence microscopy data by pointing out that a comparison of the HDR reconstructions in each even row of figures 5 to 8 allows for the following conclusion: for a suitable choice of the regularization parameter, the proposed variational approach is indeed able to combine HDR radiance reconstruction with edge-preserving denoising, the latter being reflected by a smoother appearance of the interior of cells that is not accompanied by a blurring of the cell contours in the images in the right column.

Conclusion and outlook
In this paper, we have introduced a variational approach for high dynamic range (HDR) imaging that explicitly accounts for the Poisson statistics of the unavoidable signal-dependent photon shot noise in the data and thus seems particularly suited for low-light applications such as fluorescence microscopy. Based on a review of standard approaches in the literature, we have motivated and then derived a weighted Kullback-Leibler divergence-type data fidelity term. Complementing the latter with total variation regularization, we obtained a variational model that simultaneously accomplishes HDR radiance reconstruction and edge-preserving denoising, as demonstrated by results for real fluorescence microscopy exposure sequences acquired with the recently developed MACSima™ Imaging Platform. As is customary in the context of convex optimization in imaging, we also provide the associated saddle-point and dual formulations and present a possible numerical implementation in the appendix of this paper.
As regards forthcoming research, the variational structure of the proposed approach offers us a high degree of flexibility. Acknowledging that there is nowadays a general consensus that in the large majority of cases the complexity of natural images can only partly be dealt with by handcrafted regularization functionals, a very interesting idea to investigate in the future is to further improve the denoising performance of our approach by replacing the total variation regularization term with a learned state-of-the-art regularizer such as the very recently proposed total deep variation [38, 39]. Moreover, from an application point of view, another interesting direction for future research might be HDR fluorescence microscopy of living objects. While there are examples where living objects move very little within the exposure sequence and the algorithm can be applied straightforwardly, the more interesting and challenging situation will involve object movement (such as cell migration), in which case a pixel-by-pixel calculation would not be possible and an additional motion correction step may be required.
In order to render the estimation of the camera response from minimization of this distance functional more stable against noise in the data, Debevec and Malik [18] suggested to incorporate a smoothness assumption with respect to h. More precisely, they suggested to penalize the second derivative of h by squaring its central differences approximation and taking the sum over all possible intensity values, leading to a regularized minimization problem of the form (A2), with h and E denoting an R- and an S-dimensional vector, respectively, where typically R ≪ S, and α > 0. However, a deficiency of this model is that h and E can only be determined up to an additive constant. To overcome this issue, Debevec and Malik [18] proposed to restrict the solution space by additionally requiring h to satisfy the constraint (A3). Finally, the authors suggested to once more modify the minimization problem in (A2) by adding a weight function w: 𝒵 → [0, 1] that allows to emphasize compliance with the aforementioned assumptions for medium intensity values while giving lower weight to under- and overexposed pixels, since the information provided by such pixels seems less trustworthy. Specifically, they proposed the use of a simple hat function, which for z ∈ 𝒵 is given by

w(z) = (z − z_min) χ_{z ≤ (z_min + z_max)/2}(z) + (z_max − z) χ_{z > (z_min + z_max)/2}(z),

with χ_A denoting the characteristic function of the set A. In other words, the desired estimate is characterized as the minimizer of an energy functional, whose first term ensures consistency with the given data and whose remaining parts promote a certain set of favorable properties the minimizer should satisfy, where the extent to which each objective influences the result is controlled by the parameter α. Viewed from this perspective, Debevec and Malik's approach [18] to problem one in our list hence takes the form of a variational method.
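The hat function described above is simple enough to state in code; the helper name `hat_weight` and the 8-bit default range are illustrative choices.

```python
import numpy as np

def hat_weight(z, z_min=0, z_max=255):
    """Debevec-Malik hat weight function (cf [18]).

    Ramps up linearly from z_min to the middle of the intensity range
    and back down to z_max, so that under- and overexposed pixels
    receive little weight.
    """
    z = np.asarray(z, dtype=float)
    mid = 0.5 * (z_min + z_max)
    return np.where(z <= mid, z - z_min, z_max - z)

print(hat_weight([0, 64, 127, 128, 191, 255]))
# values: 0, 64, 127, 127, 64, 0
```

Note that the weight vanishes exactly at the ends of the intensity range, so fully saturated and fully dark pixels contribute nothing to the estimate.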
To complete the current paragraph, we would like to point out that the assumptions underlying the presented approach only pose relatively mild restrictions on the space of camera response estimates, in particular if compared against different parametric models in the literature such as the three-parameter model [17] ( f (H) = α + βH^γ ) or the polynomial model [19] ( f (H) = ∑_j c_j H^j ). With regard to the list of renowned literature cited in this paper's introduction, only the approach of Robertson et al [20] seems even a little more general at first sight, since in this paper the authors make use of a set of assumptions equivalent to those of Debevec and Malik [18], with the exception of the smoothness requirement. In concrete terms, the approach in [20] is also based on the idea that the squared 2-norm between the two sides of (5) should be small. However, rather than considering the identity in the last line of (5), Robertson et al [20] draw on the formulation in the second line. As a consequence, they propose a model of the form

min_{g, E} Σ_i Σ_j w(Z_ij) (g(Z_ij) − Δt_j E_i)²,    (A7)

where w: 𝒵 → [0, 1] again denotes a weight function to grant pixels of medium intensity higher influence on the result than under- and overexposed pixels. For the sake of completeness, we mention that in [20] the authors also suggest to use a different weight function, the choice of which, however, seems less relevant for our current discussion. For us, the key point with respect to the model in (A7) is that Robertson et al [20] refrain from imposing any additional regularity requirement on g. While on the one hand this gives additional freedom to the estimation process, a potentially problematic consequence is that small variations in the data such as random noise-induced oscillations might have a substantial effect on the shape of the resulting camera response.
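For fixed g, the model of Robertson et al [20] is a weighted least-squares problem in the irradiances and, setting the derivative with respect to E_i to zero, admits the closed-form update sketched below. The function name `update_irradiance` is ours; this is only one half-step of the alternating scheme, shown to make the structure of (A7) concrete.

```python
import numpy as np

def update_irradiance(gZ, w, dt):
    """Closed-form minimizer of sum_j w_ij (g(Z_ij) - dt_j E_i)^2 in E_i,
    i.e. one half-step of the alternating scheme of Robertson et al [20].

    gZ : (P, J) values g(Z_ij) for P pixels and J exposures
    w  : (P, J) weights w(Z_ij)
    dt : (J,) exposure times
    Returns the (P,) vector of updated irradiance estimates.
    """
    num = np.sum(w * dt * gZ, axis=1)       # sum_j w_ij dt_j g(Z_ij)
    den = np.sum(w * dt ** 2, axis=1)       # sum_j w_ij dt_j^2
    return num / den
```

On exactly consistent data, i.e. g(Z_ij) = Δt_j E_i, the update recovers the true irradiance regardless of the weights.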
One might argue, of course, that this will be less of a problem when the ratio of sample points to possible intensity values S/R is sufficiently high and the intensity values of the S pixels are sufficiently evenly distributed over the range of possible intensity values 𝒵, since in this case the effects of random oscillations can be expected to cancel each other out. In general, we cannot expect this to hold, and in the context of fluorescence microscopy, where signal-to-noise ratios are rather low, exposure sequences typically consist of a rather limited number of pictures and images commonly have high bit depths, this situation seems particularly unlikely. Nevertheless, on the basis of these considerations, we can make the following interesting observation: if we revert to the model in (A6) and adopt the perspective of the theory of regularization methods for inverse problems (cf e.g. [40, 41]), it is a well-known fact that the regularization parameter α should be chosen in dependence on the data situation, more precisely such that α → 0 for increasing signal-to-noise ratio. So let us from now on assume that the value of α is set according to this rule. Then we infer that in an ideal situation as sketched above, where the noise in the data plays a negligible role, the model in (A6) in the limit amounts to an equivalent formulation of (A7), provided that the weight function in both models is identically chosen. Hence, since even in this ideal situation the model by Debevec and Malik [18] does not seem to perform worse than the one by Robertson et al [20], we, all in all, come to the conclusion that among our list of renowned works, the approach in [18] appears to be the most adequate model for our purposes.
We would like to conclude the current paragraph with the remark that, once the CRF has been estimated, it can not only be applied to the images of the exposure sequence used for the estimation process, but also be used to recover the radiance underlying any image acquired with the same camera at corresponding settings.
Appendix B. Saddle-point and dual formulations of the proposed variational HDR fluorescence microscopy approach

In the context of proper, convex and lower semicontinuous energy functionals containing bounded and linear operators like our objective functional in (25), one is often also interested in the saddle-point and dual formulations of the optimization problem, which we therefore provide in the following. To this end, we first note that, setting K = ∇, the optimization problem on the right-hand side of (25) can also be written as

min_E F(KE) + G(E),    (B2)

where F and G denote proper, convex and lower semicontinuous functionals. The corresponding saddle-point formulation reads

min_E max_y ⟨KE, y⟩ + G(E) − F*(y),

which first and foremost requires knowledge of the convex conjugate F* of F, defined as

F*(y) = sup_x ⟨x, y⟩ − F(x).

In the given situation, determining the convex conjugate F* is rather simple, since it is well known that the convex conjugate of a norm is just the characteristic function of the ball with respect to the corresponding dual norm; as a consequence, F* is the characteristic function of the set {y: max_n |y_n|₂ ≤ α}. To conclude the current section, we also derive the corresponding dual formulation (B8); here it is well known that the proximal mapping associated with F* is just the pointwise Euclidean projection onto the 2-ball (cf e.g. [44]). As an indicator of convergence of our PDHG algorithm we use the primal-dual gap, defined as the difference between the primal problem in (B2) and the corresponding dual formulation (B8) (cf [30], section 3.5). A summary of the entire numerical optimization routine is provided in algorithm 1. We conclude the current section with the remark that we use forward finite differences with Neumann boundary conditions to discretize the image gradient ∇ = (∇_1, ∇_2) and backward finite differences to discretize the divergence in order to keep the adjoint structure.
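The discretization choice in the last remark can be verified numerically: with forward differences and Neumann boundary conditions for the gradient and the matching backward differences for the divergence, the pair satisfies ⟨∇u, p⟩ = −⟨u, div p⟩ exactly. The following sketch (helper names are ours) checks this adjoint structure on random data.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary conditions."""
    g = np.zeros((2,) + u.shape)
    g[0, :, :-1] = u[:, 1:] - u[:, :-1]   # horizontal component
    g[1, :-1, :] = u[1:, :] - u[:-1, :]   # vertical component
    return g

def div(p):
    """Backward-difference divergence, the negative adjoint of grad."""
    d = np.zeros(p.shape[1:])
    d[:, 0] += p[0, :, 0]
    d[:, 1:-1] += p[0, :, 1:-1] - p[0, :, :-2]
    d[:, -1] += -p[0, :, -2]
    d[0, :] += p[1, 0, :]
    d[1:-1, :] += p[1, 1:-1, :] - p[1, :-2, :]
    d[-1, :] += -p[1, -2, :]
    return d

# Adjointness check: <grad u, p> = -<u, div p> up to machine precision.
rng = np.random.default_rng(0)
u = rng.standard_normal((7, 5))
p = rng.standard_normal((2, 7, 5))
lhs = np.sum(grad(u) * p)
rhs = -np.sum(u * div(p))
print(abs(lhs - rhs) < 1e-12)  # True
```

Keeping this exact adjoint pair is what guarantees convergence of the primal-dual iteration, since PDHG assumes that the operator and its transpose are implemented consistently.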