Forced Measurement of Astronomical Sources at Low Signal to Noise

We propose a modified moment matching algorithm to avoid catastrophic failures for sources with a low signal to noise ratio (SNR). The proposed modifications include a method to eliminate non-physical negative pixel values and a forced single iteration with an initial guess derived from co-add measurements when iterative methods are unstable. We correct for all biases in measurements introduced by the method. We find that the proposed modifications allow the algorithm to avoid catastrophic failures in nearly 100% of the cases, especially at low signal to noise ratio. Additionally, with a reasonable guess from co-add measurements, the algorithm measures the flux, centroid, size, shape and ellipticity with bias statistically consistent with zero. We show the proposed method allows us to measure sources seven times fainter than traditional methods when applied to images obtained from WIYN-ODI. We also present a scheme to find uncertainties in measurements when using the new method to measure astronomical sources.


INTRODUCTION
In optical astronomy a series of short exposures is often taken to mitigate changing observing conditions, avoid saturation, and study time variation. Stacking or co-adding images has been used to measure extremely faint sources not detectable in individual images. A significant amount of work has been done on how to best co-add images (Fischer & Kochanski 1994; Hook & Lucy 1994; Bertin et al. 2002; Gruen et al. 2014; Zackay & Ofek 2017). However, co-adds suffer from various drawbacks such as averaging of systematics, loss of information, and introduction of artifacts. The presence of clouds, varying seeing conditions, and different science requirements make weighting of individual frames in co-adds challenging (Zackay & Ofek 2017). This is especially a problem for large scale surveys that use multiple visits like SDSS (Gunn & Weinberg 1994; York et al. 2000; Stoughton et al. 2002), PanSTARRS (Kaiser et al. 2002; Chambers et al. 2016), and LSST (Ivezić et al. 2019). Over the course of several years of operation, these surveys have produced or will produce hundreds to thousands of images of any specific region of the sky. Survey images are obtained under various conditions of seeing, clouds, airmass, and background noise. Traditionally these images have been co-added to produce a final image on which all source detection and measurements are done (Annis et al. 2014; Waters et al. 2020). However, simply co-adding images represents a significant loss of information. For instance, combining an image with large seeing or an ill-behaved point spread function (PSF) with an image with particularly good seeing degrades the effective PSF significantly.
In theory, detecting the sources in the co-add and measuring them in individual exposures can improve measurements significantly. This is common for photometry, where the zero points are determined for each image individually to take into account the significant variation from one image to another (Stoughton et al. 2002; Skrutskie et al. 2006). Ivezić et al. (2007) was able to produce extremely accurate photometry from SDSS by taking into account systematic effects in individual images. Sako et al. (2008) used forced positional photometry on individual SDSS images to place upper limits on the magnitude of pre-supernova candidates.
Similarly for astrometry, it has been shown that using multiple short exposures provides better astrometric estimates. In a long exposure image, near-Earth asteroids appear as streaks. Inferring the average position of a source from a streak is inherently problematic and introduces large astrometric errors. The "shift-and-stack" method was first proposed by Tyson et al. (1992) and later used by Cochran et al. (1995) and Gladman et al. (1998) to detect Kuiper Belt and Trans-Neptunian objects respectively. More recently Zhai et al. (2014) and Shao et al. (2014) used a similar technique called synthetic tracking, where each individual exposure is shifted and added such that asteroids appear as point sources while stars appear as streaks in the final co-add. The asteroid position can then be measured with much better accuracy. To measure the relative position of the asteroid with respect to stars, one can average the positions of stars in each individual exposure rather than measuring the streaked stars in the co-added image, and hence avoid measuring streaks altogether.
This approach of using multiple short exposures is also useful in gravitational lensing studies, where measuring the shapes of astronomical sources is of paramount importance (Tyson et al. 2008; Jee & Tyson 2011; Mandelbaum 2018). This method also helps to take into account any chip-based systematic effects and to discard or de-weight periods of bad observing conditions. This idea has been used in CFHTLenS (Miller et al. 2013) and DES (Zuntz et al. 2018). Thus, in theory, all measurable quantities (flux, position, shape) benefit from using individual exposures.
The major challenge of using individual exposures is that the sources are much fainter than in the co-add, and faint sources can be challenging to measure. Several methods have been proposed to measure faint sources (Hogg & Turner 1998; Lang et al. 2009; Teeninga et al. 2016; Lindroos et al. 2016). Some of these, such as forced photometry (Stoughton et al. 2002; Sako et al. 2008; Saglia et al. 2012; Lang et al. 2016), are simple but limited to photometry. Other algorithms propose complicated methods to achieve the desired results (Hogg & Turner 1998; Teeninga et al. 2016).
In this paper, we propose a moment matching method which can measure the flux, centroid, size, and ellipticity of sources down to arbitrarily low SNR. The paper is organized as follows. In Section 2, we explain the moment matching algorithm in detail and give expressions for the errors in measurement when using this algorithm. In Section 3, we describe two novel modifications that allow this algorithm to be used at arbitrarily low SNR. Finally, in Section 4 we show that our method works well with simulations in the presence of noise. The performance of this method on point sources in data obtained from WIYN-ODI is also shown.

MOMENT MATCHING ALGORITHM
In this section, we describe the moment matching algorithm used. It is similar to the one proposed by Kaiser et al. (1995) and later modified by Bernstein & Jarvis (2002). We use an adaptive moment matching method that uses an elliptical Gaussian as the weight. The shape, size, and centroid of the elliptical Gaussian are iteratively calculated from the image itself. The choice of an elliptical Gaussian profile is not unique and can be replaced with other popular profiles such as de Vaucouleurs and pure exponential. It has been shown that a weighted centroiding method similar to this has optimal performance when compared to other existing methods (Thomas et al. 2006). Several different methods of centroiding can be found in the literature (King 1983; Stone 1989; Mighell 2005; Thomas et al. 2006).
We define I(x,y) as the input image or 2-D array, µ_x as the centroid position along the x-axis, µ_y as the centroid position along the y-axis, σ²_xx as the variance along the x-axis, σ²_yy as the variance along the y-axis, σ²_xy as the variance along the y = x line, w(x,y) as the weight value at pixel (x,y), and B as the background level. We also define µ_x = µ_y = 0 at the center of the image.
Initial guess values can be specified by the user. This is important for the forced measurement method described in the next section, since it relies on a reasonably good initial guess obtained from the co-add. Unless specified, the default values are σ²_xx = 9, σ²_yy = 9, σ²_xy = 0, µ_x = 0, and µ_y = 0. The background B is initially set to the median value of all pixels in I(x,y). Next, the 2-D elliptical Gaussian weight w(x,y) is computed from the current values of µ_x, µ_y, σ²_xx, σ²_yy, and σ²_xy. The following quantities are then calculated and updated iteratively; for example, the centroid along the x-axis is

µ_x = Σ_{x,y} x [I(x,y) − B] w(x,y) / Σ_{x,y} [I(x,y) − B] w(x,y)    (3)

with analogous weighted-moment expressions for µ_y and the second moments σ²_xx, σ²_yy, and σ²_xy; the background B is likewise re-estimated at each iteration. We repeat this until the values of both σ²_xx and σ²_yy change by less than 10⁻³ between iterations. This is defined as convergence. It is important to quantify the errors in the measurement of flux, centroid, size, and ellipticity when using this algorithm. We do this in a semi-empirical way using simulations. We simulate a large number of sources with various backgrounds, fluxes, and ellipticities and measure the uncertainty in each as a function of flux, size, background, and ellipticity. The range of flux is 1 − 10⁷ counts, size is 10 − 20 pixels, and ellipticity is drawn from a Gaussian distribution centered at 0.1 with a width of 0.2. Ellipticity values less than 0 are rejected.
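The iterative scheme above can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' code: the exact weight expression, the Equation 3 style updates, and in particular the factor of 2 applied to the measured second moments (the standard adaptive-moments correction, cf. Bernstein & Jarvis 2002) are assumptions about details not fully spelled out here; the per-iteration background update is omitted and B is held at its initial median value.

```python
import numpy as np

def adaptive_moments(image, mu_x=0.0, mu_y=0.0,
                     sxx=9.0, syy=9.0, sxy=0.0,
                     max_iter=100, tol=1e-3):
    """Adaptive moment matching with an elliptical Gaussian weight.

    Symbols follow the text: (mu_x, mu_y) centroid, (sxx, syy, sxy)
    second moments, B background. Defaults are the guesses from the text.
    """
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    x = x - (nx - 1) / 2.0          # origin at the centre of the cutout
    y = y - (ny - 1) / 2.0
    B = np.median(image)            # initial background estimate (held fixed here)

    for _ in range(max_iter):
        # Elliptical Gaussian weight built from the current moments.
        det = sxx * syy - sxy ** 2
        dx, dy = x - mu_x, y - mu_y
        w = np.exp(-0.5 * (syy * dx**2 - 2.0 * sxy * dx * dy + sxx * dy**2) / det)

        r = (image - B) * w         # weighted, background-subtracted image
        norm = r.sum()
        new_mu_x = (x * r).sum() / norm          # Equation 3
        new_mu_y = (y * r).sum() / norm
        # Factor 2: a Gaussian weight matched to the object halves the
        # measured second moments, so they are doubled before the next
        # iteration (an assumed detail; cf. Bernstein & Jarvis 2002).
        new_sxx = 2.0 * ((x - new_mu_x) ** 2 * r).sum() / norm
        new_syy = 2.0 * ((y - new_mu_y) ** 2 * r).sum() / norm
        new_sxy = 2.0 * ((x - new_mu_x) * (y - new_mu_y) * r).sum() / norm

        # Convergence: both variances change by less than the tolerance.
        done = abs(new_sxx - sxx) < tol and abs(new_syy - syy) < tol
        mu_x, mu_y, sxx, syy, sxy = new_mu_x, new_mu_y, new_sxx, new_syy, new_sxy
        if done:
            break
    return mu_x, mu_y, sxx, syy, sxy
```

Running the same loop with max_iter=1 and co-add-derived guesses corresponds to the single-iteration "forced" variant discussed in the next section.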
Simulating sources is done in two steps: first we simulate the background, and then the object is added. To simulate the background, each pixel is assigned a random real number drawn from a Gaussian distribution of mean B and variance B. To simulate an object of flux N, a random number is drawn from a Gaussian distribution with mean N and variance N; call this number N₁, the total number of photons received from the object. Next, a probability distribution function is created with the same 2-D shape as the object, and N₁ photons are drawn from this distribution to simulate the source. We use the multivariate normal distribution method in numpy (Harris et al. 2020) to perform this. Next, we run the above-mentioned moment matching algorithm on the source for a single iteration with the correct guess shape. This allows us to isolate the errors in the various measured parameters, even when one has almost perfect knowledge of the source.
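The two-step simulation can be sketched as below. The function name and its default values are illustrative choices; only the recipe itself (a Gaussian background of mean and variance B, a photon count N₁ ~ N(N, N), and photon positions drawn with numpy's multivariate normal sampler) is taken from the text.

```python
import numpy as np

def simulate_source(n_pix=64, flux=5000.0, background=100.0,
                    cov=((4.0, 0.5), (0.5, 4.0)), seed=0):
    """Sketch of the two-step source simulation described in the text."""
    rng = np.random.default_rng(seed)
    # Step 1: background, Gaussian with mean B and variance B per pixel.
    img = rng.normal(background, np.sqrt(background), size=(n_pix, n_pix))

    # Step 2: draw N1 ~ N(N, N) photons and place them by sampling a
    # 2-D normal with the object's covariance, then histogram onto pixels.
    n1 = max(0, int(round(rng.normal(flux, np.sqrt(flux)))))
    centre = (n_pix - 1) / 2.0
    photons = rng.multivariate_normal([centre, centre], cov, size=n1)
    hist, _, _ = np.histogram2d(photons[:, 0], photons[:, 1],
                                bins=n_pix, range=[[0, n_pix], [0, n_pix]])
    return img + hist
```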
The errors when measuring image parameters arise mainly from photon noise and background. Hence we simulate a million sources with no background to recover the error due to photon noise. To recover the background component, we simulate sources with background, find the total error, and subtract the error due to photon noise. An alternative way to find the second component is to simulate sources without photon noise but with background and find the errors. Both methods give consistent results.
Dimensional analysis is used as a sanity check. For the image parameters being measured, flux is a dimensionless quantity denoted by N. Background is in units of 1/pixel² and is denoted by B. The variances, namely σ²_xx, σ²_yy, and σ²_xy, have units of pixel². We also define A = π√(σ²_xx σ²_yy − σ⁴_xy) as the area of the object and S² = σ²_xx + σ²_yy as the size of the object.

Flux
There are two significant sources of error in flux measurements: 1) a Poisson component due to source photons and 2) a Poisson component due to background photons. It is well known that the noise in flux due to photon noise is √N, where N is the number of detector counts. However, the algorithm can only expect to achieve this asymptotically in the presence of background noise. Next, using the above-mentioned method, we find the error due to background is √(4πS²B), where S denotes size and B denotes background. This can be derived from the expression for N in Section 2.

N = Σ_{x,y} (I(x,y) − B) w(x,y) / Σ_{x,y} w²(x,y)

It can be shown that Σ_{x,y} w²(x,y) = 1/(4πS²), so the previous equation can be written as

N = 4πS² Σ_{x,y} (I(x,y) − B) w(x,y).

Assuming the uncertainty arises only due to background photons, the uncertainty in N is

ϵ² = Σ_{x,y} σ²(x,y) w²(x,y) / [Σ_{x,y} w²(x,y)]²,

where σ²(x,y) is the uncertainty in the counts of pixel (x,y). Since σ²(x,y) = B for every pixel, this simplifies to ϵ² = B / Σ_{x,y} w²(x,y). After simplification, the final expression for the error due to background is

ϵ_bkg = √(4πS²B).

This expression is identical to the √(4πbσ²) expression obtained by Bernstein & Jarvis (2002), where b is the background counts per pixel and σ is the size of the source. The background Poisson component depends on the weight function; with our choice the error is √(4πS²B), which appears to be optimal. Hence we introduce a constant K = 4π. The error in flux can then be written as

ϵ²_flux = N + KS²B,

and using this we can define the SNR as

SNR = N / √(N + KS²B).

In Figure 1, we show that this expression accurately predicts the errors in flux. The black circles, which denote ϵ²_flux/N², closely follow the 1/N line, second from the top. Deviations seen in Figure 1 are primarily due to the algorithmic limitation of not being able to characterize the background perfectly.
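The flux error budget and the SNR definition can be collected into a small helper. This sketch assumes the two error components add in quadrature and that the constant K equals 4π, consistent with the background-error derivation above; both are our reading of the equations rather than a verbatim transcription.

```python
import math

K = 4 * math.pi  # constant from the background-error derivation (assumed K = 4*pi)

def flux_error(N, S2, B):
    """Total flux error: photon term N plus background term K*S^2*B,
    added in quadrature. S2 is the size S^2 = sigma2_xx + sigma2_yy."""
    return math.sqrt(N + K * S2 * B)

def snr(N, S2, B):
    """SNR = N / sqrt(N + K*S^2*B)."""
    return N / flux_error(N, S2, B)
```

With no background the SNR reduces to the familiar √N, e.g. snr(10000, 8.0, 0.0) gives 100.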

Size
The errors in size also have two components, namely intrinsic and background. The uncertainty in size can be written in terms of the flux N, the size S, the background B, and the constant K defined in Equation 15. The first component is the intrinsic uncertainty arising from photon noise and the second component is due to the background. The reason we use the size S over the conventional σ, i.e. the spread along each axis, is that using the size is more accurate than using σ in cases of elliptical objects.
As shown in Figure 1, the light green circles, which denote ϵ²_size/S⁴, closely follow the 1/2N line, third from the top. We also write down the formula for the errors in the σ parameters in terms of the number of counts N, the background B, and the effective size S; the dark blue circles and dark blue triangles in Figure 1 show these errors.

Centroid
Once again the errors in centroid arise from background and photon noise. According to Thomas (2004), the centroid uncertainty due to photon noise is A/(πN), where A is the area and N is the number of counts. We recover this from our experiments as well. Extending this idea to include background yields an expression in terms of the flux N, the size S of the source, and the background B. This is also equivalent to the errors presented by King (1983) and Lang et al. (2009). We decide not to use σ, i.e. the spread along each axis, in this formula and instead use the size S, for the same reason as before. In Figure 1, ϵ²_µx/S² is shown as light blue circles and follows the 1/N line, second from the top. The y component is shown as light blue triangles and follows the same line.

Ellipticity
The errors in ellipticity are shown in Figure 1: ϵ²_e1 is shown as red circles and ϵ²_e2 as red triangles. Both follow the 1/N black line (second from the top). The two components of ellipticity are e₁ and e₂, defined from the second moments as e₁ = (σ²_xx − σ²_yy)/(σ²_xx + σ²_yy) and e₂ = 2σ²_xy/(σ²_xx + σ²_yy). The total ellipticity is obtained by adding e₁ and e₂ in quadrature. The errors quoted above are for when the algorithm is allowed to converge. We once again emphasize that the deviation of points in Figure 1 from the exact expressions is primarily due to the limitation of our algorithm. We found that when we supply the algorithm with the true values as the initial guess and run it for one iteration, the errors decrease significantly: the form of the errors remains the same, but they are reduced by a factor of 2. The only exception to this rule is the error in flux, which remains unchanged. We also found a slight dependence of the errors on ellipticity in some cases. For example, it is not hard to see that when e₁ > 0, the error in µ_x will be larger than the error in µ_y. These effects are significant at e > 0.5, and only a small fraction of astronomical sources have such high ellipticity.
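The ellipticity components follow directly from the second moments. The explicit e₁, e₂ expressions below are the standard moment-based definitions (e.g. Bernstein & Jarvis 2002), which we assume are the ones intended here.

```python
def ellipticity(sxx, syy, sxy):
    """Ellipticity components from second moments:
    e1 = (sxx - syy)/(sxx + syy), e2 = 2*sxy/(sxx + syy).
    Total ellipticity is the quadrature sum of the two components."""
    denom = sxx + syy
    e1 = (sxx - syy) / denom
    e2 = 2.0 * sxy / denom
    e = (e1 ** 2 + e2 ** 2) ** 0.5
    return e1, e2, e
```

A circular source (sxx = syy, sxy = 0) gives e1 = e2 = e = 0, while sxx = 6, syy = 2, sxy = 0 gives e1 = 0.5.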
Figure 1. This plot shows the intrinsic errors in flux, centroid, size, and ellipticity. A wide range of sizes, counts, and ellipticities has been used to make this plot. The background was set to 0. To produce each point, 10⁶ samples were drawn and the error in measurement was recorded. "Forced" indicates only a single iteration was performed with the true values as the guess. The black lines from the top are 2/N, 1/N, 1/2N, 1/4N, and 1/8N respectively. The black circles, which denote ϵ²_flux/N², closely follow the 1/N line. Light green circles, which denote ϵ²_size/S⁴, closely follow the 1/2N line. For the centroid, ϵ²_µx/S² is shown as light blue circles and follows the 1/N line. For ellipticity, ϵ²_e1 is shown as red circles and ϵ²_e2 as red triangles; both follow the 1/N line. It is clear from the plot that the errors in all components except flux are approximately twice as large when convergence is allowed. The large deviation in fractional flux error when the number of counts is extremely low is likely due to the breakdown of the Gaussian approximation.

Generalization of Forced Photometry to all measurements
We found the above algorithm fails at sufficiently low SNR. This is expected, but it completely limits the ability to make individual-frame measurements. The failures start as rare events at SNR ≲ 10 and climb to a 100% failure rate at SNR ≤ 6, as shown in Figure 7. This poses a huge problem when trying to make measurements on individual frames, since a vast majority of the sources are expected to be extremely faint in individual frames. To solve this we generalize a technique originally developed for forced photometry (Stoughton et al. 2002; Lupton et al. 2005; Sako et al. 2008; Saglia et al. 2012; Lang et al. 2016). We call this method forced measurement to generalize it to photometry, astrometry, and size/shape measurements.
Forced photometry is a method of measuring flux when reliable detections cannot be made. It uses the positions of sources detected elsewhere, often in a co-add or other data, to guide where flux measurements are made in individual frames. This can be effective even when a source cannot be detected in an individual frame. There are several different types of photometry, such as aperture photometry (Stetson 1987), PSF photometry, and model photometry (Lupton et al. 2001; Skrutskie et al. 2006; Ivezić et al. 2007), and forced photometry can use any of these slightly different methods. Since we use an elliptical Gaussian weight for moment matching, we concentrate on model photometry, which also uses an elliptical Gaussian model. Such model photometry is equivalent to running the moment matching algorithm for one iteration.
Motivated by forced photometry, we note that we can generalize this approach to astrometry and shape measurements. As noted before, if forced photometry, which is equivalent to running the moment matching algorithm for one iteration, produces sensible flux measurements, we expect the other measurements to be sensible as well, albeit noisy. In other words, at extremely low SNR, where the traditional moment matching algorithm fails to converge, we can expect to measure the position and shape of a source by running the moment matching algorithm for one iteration. As is the case with forced photometry, we need the position and shape of the source to perform forced measurement. The position of the source on the sky and its shape (profile) are determined from the co-add, or perhaps some other deeper image. We assume that in the co-added image the SNR is sufficiently high for the traditional moment matching algorithm to converge. This allows us to obtain a good estimate/guess of the flux, shape, and centroid in each individual image of the co-add. In individual frames, we create a cutout centered on this position and use the co-add shape as the guess (taking into account the necessary PSF correction) to run the algorithm for one iteration. The flux obtained using this strategy is identical to using model forced photometry. In addition to the flux, this strategy was found to yield fairly accurate values of shape and centroid using the equations for the moment matching algorithm in Section 2, as long as our cutouts are fairly well centered and the input profile is not too far from the true profile. In the following discussion we use input profile and guess shape interchangeably.
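The forced flux measurement can be sketched as a single evaluation of the Section 2 flux estimator at a fixed, co-add-derived guess. The explicit normalization of the weight to unit sum is an assumption needed for the estimator to return the total flux; parameter names follow the text's symbols.

```python
import numpy as np

def forced_flux(cutout, B, mu_x, mu_y, sxx, syy, sxy):
    """Single-iteration ('forced') flux: evaluate the elliptical Gaussian
    weight at the co-add-derived guess (mu, sigma) and apply the
    matched-filter estimator N = sum((I - B) w) / sum(w^2)."""
    ny, nx = cutout.shape
    y, x = np.mgrid[0:ny, 0:nx]
    dx = x - (nx - 1) / 2.0 - mu_x
    dy = y - (ny - 1) / 2.0 - mu_y
    det = sxx * syy - sxy ** 2
    w = np.exp(-0.5 * (syy * dx**2 - 2.0 * sxy * dx * dy + sxx * dy**2) / det)
    w /= w.sum()                     # normalise the weight template (assumed)
    return ((cutout - B) * w).sum() / (w ** 2).sum()
```

For a noiseless Gaussian source whose guess shape matches the truth, this recovers the input flux.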

Truncation Correction
While the above strategy helps us push the limits, even this starts failing at SNR ≲ 5. At such low SNR, by pure chance, σ²_xx and σ²_yy sometimes take on negative values. Such a result has no physical significance, and the problem gets worse as the SNR is decreased. We find that pixels with negative residual values after background subtraction cause this issue. It is clear that negative pixel values, after accurate background subtraction, have no physical meaning and arise from Poisson fluctuations.
Consider a single pixel having N counts, of which B counts are from background. The background-subtracted count hence is (N − B) ± √N. To ensure we never end up with negative pixel values, which are unphysical, we take only the positive portion of this distribution and call its median value the true pixel value. When N − B ≫ 0 the median value is essentially the same as N − B; however, as N − B approaches 0 and then becomes negative, the median value approaches 0. Solving for the median of the positive portion of the distribution yields an implicit equation, involving the error function (erf), that relates the raw pixel value N, the background B, and the inferred pixel value µ. Our objective is to calculate µ given N and B. The resulting expression involves the inverse error function, which is difficult to evaluate and inherently unstable, so we instead fit a function to it:

µ = √(2N) · 0.477 e^(0.928q + 0.247q² + 0.04q³)  when (N − B) > 0
µ = √(2N) · 0.477 e^(0.551q − 0.06q² + 0.003q³)  when (N − B) ≤ 0    (28)

where q = (N − B)/√N. Using Equation 28 ensures we always have positive pixel values and hence avoids the problem of negative σ²_xx or σ²_yy. By forcing all pixels to have positive values, however, the size and flux are systematically overestimated. The overestimation was found to be a function of the flux, the sigmas of the source, and the background. We quantify the overestimation with factors f_n for the flux and g_n for the second moments (Equations 29 and 30), where σ² can be replaced with σ²_xx, σ²_yy, or σ²_xy and n is the number of iterations. N_t refers to the true number of photons received from the source and A_t refers to the true area of the source. N_m is the measured value of N after performing one iteration of the moment matching algorithm on the truncated image, and similarly σ²_m is the measured shape parameter. B stands for the median background value. The overestimation is shown in Figure 2 for n = 1. In practice, knowing N_t and A_t beforehand is impossible; it was found that using the guess values derived from the co-add works well. These are the same initial guess values we use for forced measurement.
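Equation 28 can be implemented directly. In the sketch below, the √(2N) prefactor grouping and the signs of the polynomial coefficients are assumptions about how the equation reads; the prefactor does reproduce the half-normal median 0.6745√N at N = B, which supports that reading.

```python
import math

def truncated_pixel(N, B):
    """Inferred positive pixel value mu from raw counts N and background B,
    using the fitted approximation of Equation 28 (as reconstructed here).
    At N == B this returns 0.477*sqrt(2N) ~ 0.6745*sqrt(N), the median of
    the positive half of a zero-mean Gaussian with standard deviation sqrt(N)."""
    q = (N - B) / math.sqrt(N)
    if N - B > 0:
        return math.sqrt(2.0 * N) * 0.477 * math.exp(0.928 * q + 0.247 * q**2 + 0.04 * q**3)
    return math.sqrt(2.0 * N) * 0.477 * math.exp(0.551 * q - 0.06 * q**2 + 0.003 * q**3)
```

The output is always positive and shrinks toward 0 as N − B becomes increasingly negative, which is the behavior the truncation is designed to enforce.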
The functions f and g depend on the number of iterations. This can be attributed to the fact that we force all the pixels to have positive values using the truncation described above, and hence the σ values used at the start of each new iteration keep increasing. This causes a runaway effect in most cases except when SNR ≳ 100. A similar effect is seen for most other parameters, such as background and flux. It was found that when the SNR is higher than ∼ 100, the algorithm converges in fewer than 100 iterations. The f and g functions for the single-iteration case are denoted by f₁ and g₁; when convergence is allowed, they are denoted f∞ and g∞ respectively. The graphs for f₁ and g₁ are shown in Figure 2. In order to produce these graphs, we simulated 4512 sources with flux values ranging from 1 − 10⁵ counts, background values ranging from 50 − 2000 counts, sizes ranging from 2 − 6 pixels, and ellipticities ranging from 0 − 0.5. Each case was run 250 times and we measured the median overestimation in flux and σ. From these we calculate f and g as mentioned in Equations 29 and 30. To make the piecewise linear curve in Figure 2, we divide the x-axis into sections of 0.05 and take the median of all points in each section; joining these points gives the red curve in Figure 2. To produce the black lines we fit the logistic functions shown in Equations 31 and 32.
f₁ = 4 / (1 + e^(2.5(x−0.7)−0.8))    (31)

where x is the same combination of source and background parameters used as the abscissa of Figure 2, with a similar logistic form for g₁ (Equation 32). The graphs for f∞ and g∞ are shown in Figure 3. We follow the same process as before, except we now let the algorithm run until convergence. The condition for convergence was defined as both σ²_xx and σ²_yy changing by < 10⁻³ from one iteration to the next. It is clear from Figure 3 that the factors f∞ and g∞ become unstable and reach very high values on the left edge of the graph (low SNR).
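The logistic fit for f₁ is a one-liner. The grouping of the "−0.8" term inside the exponent is an assumption in this reconstruction, chosen so that the correction factor vanishes at high SNR (large x) and saturates at 4 at low SNR.

```python
import math

def f1(x):
    """Logistic fit for the single-iteration flux overestimation factor
    (Equation 31 as reconstructed: 4 / (1 + e^(2.5(x - 0.7) - 0.8))).
    Tends to 4 for small x (low SNR) and to 0 for large x (high SNR)."""
    return 4.0 / (1.0 + math.exp(2.5 * (x - 0.7) - 0.8))
```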

Error correction for truncation
In Section 2, we presented the errors in the various measured values when using the traditional moment matching algorithm until convergence. It was found that the errors when using a single iteration of forced measurement are much smaller. This partly comes from the fact that to run a single iteration of forced measurement, we need to input reasonably good initial guess values. It was found that the errors decrease by a factor that is, once again, a function of the same quantity used for f_n and g_n. We quantify the errors with adjustment factors p_N, p_σ²xx, and p_µx for the flux, σ²_xx, and µ_x respectively (Equations 33 − 35). Once again we follow the same procedure used to find the functions f₁ and g₁ in order to determine the three different p functions. Figure 4 shows p_N, while Figure 5 shows p_σ²xx and p_µx on the left and right respectively. By symmetry, in the above equations we can replace µ_x with µ_y and σ²_xx with σ²_yy. To get the error equation for σ²_xy we multiply the right-hand side of Equation 34 by 1/2. The approximate logistic forms for p_N and p_µx were found to be

p_N = 0.65 / (1 + e^(−3(x−0.6))) + 0.45    (36)

p_µx = 0.7 / (1 + e^(−3.1(x−0.6)))    (38)

with a similar form for p_σ²xx (Equation 37).

In this section we show the characteristics of the 3 different algorithms under different values of SNR: 1) the traditional moment matching algorithm, 2) the forced measurement algorithm run until convergence, and 3) the forced measurement algorithm proper, i.e.
with a single iteration. For this test, we simulate 12773 sources with varying size, flux, ellipticity, and background. We vary the size from 2 to 5 pixels and the flux from 1 to 10⁵ counts. The background varies from 50 to 2000 counts. Ellipticity values vary from 0 to 0.7, distributed between both e₁ and e₂. The distributions of SNR, area, and ellipticity of the sources are shown in Figure 6. For each source, we produce 200 instances and apply the 3 different algorithms. For the traditional moment matching algorithm and forced measurement until convergence, we start with the guess values mentioned in Section 2. For the single-iteration forced measurement algorithm, the true values were supplied as the guess. We measure four key parameters: the failure rate and the percentage deviations of flux, σ²_xx, and σ²_xy from the truth. We define failure as the algorithm producing nonsensical results, such as negative values of σ²_xx or σ²_yy, or crashing. If the σ_x or σ_y value is larger than one third of the cutout size, we also classify that as a failure. The cutout size is 100×100 pixels in all cases. The results are shown in Figures 7, 8, and 9. We see in Figure 7 that with the traditional moment matching algorithm the failure rate begins to climb at SNR ∼ 10. In the range 10 < SNR < 15, the failure rate is 0% while the percentage error in flux is 1% ± 1.1%. In the same range the error in σ²_xx is 0.9% ± 2.3%, while the percentage error in σ²_xy is −2.4% ± 15%. At lower SNR the error in flux begins to climb rapidly and approaches nonsensical values of > 10⁴%. The errors in σ²_xx and σ²_xy show the same behavior. In the range 0.1 < SNR < 1, the failure rate is 89.5% ± 3.6% and the percentage error in flux is 396% ± 474%. In the same range the percentage error in σ²_xx is −1.9% ± 48%, while the percentage error in σ²_xy is −40% ± 211%. Forced measurement run until convergence is shown in Figure 8. We find this method is able to produce accurate results only at high SNR ≳ 100. This algorithm is
unstable because repeated iterations along with the distortion correction cause the size to become progressively larger or smaller until it exceeds the effective size of the cutout or becomes smaller than the size of a single pixel. At higher SNR we get only small errors in flux and σ. We explored the stability of this method at lower SNR and found that applying the truncation only once and slightly overestimating the background would make the failure rate comparable to the traditional moment matching algorithm. However, the bias correction arising from truncation cannot be obtained unless the true values are known.
Forced measurement is shown in Figure 9. We find the number of failures drops to 0 for all SNR. In the range 10 < SNR < 15, the failure rate is 0% and the percentage error in flux is 0.7% ± 1.2%. The percentage error in σ²_xx in this range is 0.13% ± 0.68%, whereas σ²_xy was recovered with an accuracy of −0.1% ± 3.5%. The performance of this method becomes clear at lower SNR. In the range 0.1 < SNR < 1, the failure rate is 0% and the percentage error in flux is −1.9% ± 21%. For cases in this range σ²_xx was recovered with an accuracy of 0% ± 0.8%, while the percentage error in σ²_xy was −0.2% ± 4.7%. This is much better than the previous case: we get unbiased measurements with a tighter error spread.
Since forced measurement with a single iteration showed the best performance, we test this algorithm further. We select 3961 cases with unique combinations of flux, size, background, and ellipticity such that the SNR lies between 1 and 8. The ranges from which these values are selected are the same as before. We make two hundred instances of each source and measure each instance with forced single-iteration measurement using the true guess values. We find the median of all two hundred measurements and call this our best estimate from individual image measurements. These are shown as blue points in Figure 10. We also co-add the two hundred instances and run the co-added image through the traditional moment matching algorithm. Because of the SNR range selected initially, convergence of the traditional moment matching algorithm on the co-add is guaranteed. These are shown as red dots in the plot. We can see that forced measurement performs significantly better. In the range log₁₀(SNR) < 0.1, the accuracy of forced measurement for flux is 0.18% ± 3.9%. The percentage error for σ²_xx is −0.04% ± 0.8%, while for σ²_xy the percentage error is −0.5% ± 4.9%. It has tighter error properties without bias.
In reality, however, sources are affected differently by atmospheric effects in each exposure. This causes random variations in their flux, centroids, and shape (Pier et al. 2003; Chang et al. 2012; Kerber et al. 2016). To simulate this, we follow the same method as before of simulating 200 cutouts for each combination of flux, size, background level, and ellipticity. However, in this case we inject a 1 pixel error in the centroid and a 15% error in all three size variances (σ²_xx, σ²_yy, σ²_xy) when simulating the source. This simulates the random size and centroid variation expected in multiple exposures. The errors are drawn from a random normal distribution. In addition to these errors, we also inject errors in our guess parameters. The error in the guess flux and σ² is 15%. This effectively introduces additional error in σ² and simulates the variation in flux expected in each cutout. The results are shown in Figure 11. We see the flux calculation with the new method shows a few percent bias, which was determined to come from the centroid errors introduced. For cases where log₁₀(SNR) < 0.1, the percentage error in flux is −2.9% ± 4.1%, while for σ²_xx the error is 0.1% ± 1.6%. The percentage error for σ²_xy is 0.15% ± 4.8%. In the same SNR range, using the co-add to determine the same parameters gives an accuracy of 0.7% ± 10% for flux, while σ²_xx has an error of 12.5% ± 22.6% and the percentage error in σ²_xy is 0% ± 185%. The shape parameters obtained from forced measurement (σ²_xx and σ²_xy) show no bias. All the quantities have significantly tighter errors compared to measurements using the traditional moment matching method on the co-add.

Figure 8. In this plot we show the performance of our modified moment matching algorithm run until convergence for the same range of cases as Figure 7. We find that convergence is consistent only when SNR ≳ 100. This shows forced measurement does not perform well when applied iteratively, unless the source is at extremely high SNR ≳ 100.

In Figure 12 we show the
centroid measurements for the same case. We find no bias and tighter error properties when using forced measurement.
Finally, we show the performance of this method in the presence of bias, since some level of bias is to be expected in real use cases. For this test, we supply guess flux, σ²_xx, σ²_yy, and σ²_xy values that are 10% larger than the true values. For each of the 200 cutouts simulated, we once again introduce a 1 pixel error in the centroid along each axis and 10% random errors in all three shape σ values to simulate the random variations expected in each cutout. We also inject 10% random errors in the guess flux and all three guess shape parameters. This method is exactly the same as the previous case, except for the level of errors used and the additional bias component in the guess parameters. The results are shown in Figure 13. For the cases with log₁₀(SNR) < 0.1, the forced measurement method gives a 5% ± 4.3% error in flux. The value of σ²_xx can be determined with an accuracy of 10.4% ± 1.3% and σ²_xy with an accuracy of 10.5% ± 4.8%. In the same SNR range, using traditional moment matching on the co-add to determine the same parameters gives an accuracy of 1.7% ± 9.9% for flux. The error in σ²_xx is 16.1% ± 26% and the error in σ²_xy is 18.6% ± 177%. Flux values from forced measurement are better than the initial biased guess, while σ²_xx and σ²_xy are measured at the same level as the input bias.
For σ²xx and σ²xy at higher SNR, forced measurement produces slightly worse results than the guess. However, this is not a serious issue, since at higher SNR the shape can be determined from the co-add with much better accuracy. This can then be used to construct a better guess value and improve the forced measurement. In the range 0.75 < log10(SNR) < 0.85, the percentage error in flux using the proposed method is 4.8% ± 1.8%. In this range the error in σ²xx is 11.9% ± 2.1%, while the error in σ²xy is 10.6% ± 4.7%. In the same SNR range, using traditional moment matching on the co-add to determine the same parameters gives an accuracy of 0% ± 1.8% for flux and an error of 12.1% ± 10% for σ²xx. The percentage error for σ²xy is 0.15% ± 29%.

Tests on WIYN-ODI data
In this section we show the characteristics of three different algorithms at different values of SNR: 1) the traditional moment matching algorithm, 2) forced measurement without truncation, and 3) forced measurement with truncation and bias correction. We note that the flux obtained from forced measurement without truncation is identical to forced photometry. The data for this test were obtained using the ODI instrument on the WIYN 3.5m telescope as part of a weak lensing analysis program. ODI consists of 30 CCDs arranged in a 5x6 pattern with a pixel scale of 0.11′′ per pixel. The seeing conditions during data acquisition varied between 0.6′′ and 3′′ with a median value of 1′′. All exposures are 60s long and were obtained using one of the 5 wide-band photometric filters available on ODI, namely 'u', 'g', 'r', 'i' and 'z'. The complete details of these data, along with the weak lensing analysis, will be published in a follow-up paper. We use SWarp (Bertin et al. 2002) for co-adding 124 'i' band and 144 'r' band images. For co-adding, a weight depending on the zero point Z of the image, the seeing S and the background variance σ²b was found to be optimal. The co-added images were visually inspected and several regions of artifacts were identified. These regions were masked from source detection and constitute less than 2% of the area of the co-added image. Co-adding is also done individually for the 'i' and 'r' bands. Source detection is done using SExtractor (Bertin & Arnouts 1996) on the 'i' + 'r' co-added image with 'DETECT THRESH' and 'ANALYSIS THRESH' set to 0.8 and 'DETECT MINAREA' set to 5 pixels. After source detection we measure the detected sources in the 'i' band co-add using the traditional moment matching algorithm. The process for making an optimal cutout in the co-add in order to reduce light contamination from nearby sources is non-trivial and is described in the follow-up paper. However, it was found that for the vast majority of sources this does not make a significant difference; using a square cutout based on the size measured by SExtractor is sufficient.

Figure 10. The red points show the co-add measurements whereas the blue points show the measurements obtained from the individual images. The x axis is the SNR of individual images. When using forced measurement, we do not find any bias. The graphs clearly show the better error properties obtained when using this method.
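For convenience, the detection settings quoted above can be kept in one place and rendered as a SExtractor-style configuration fragment. The helper below is our own illustrative sketch; the keyword names DETECT_THRESH, ANALYSIS_THRESH and DETECT_MINAREA are the standard SExtractor configuration keywords:

```python
# Detection settings used for source detection on the 'i' + 'r' co-add.
SEX_CONFIG = {
    "DETECT_THRESH": 0.8,    # detection threshold, in units of background sigma
    "ANALYSIS_THRESH": 0.8,  # analysis threshold, same units
    "DETECT_MINAREA": 5,     # minimum number of connected pixels
}

def render_config(cfg):
    """Render a dict of settings as SExtractor config-file lines."""
    return "\n".join(f"{key:<16}{value}" for key, value in cfg.items())
```

The rendered fragment can be appended to a default SExtractor configuration file before running the detection.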
The magnitude vs size graph of all sources as measured in the 'i' band co-add is shown in Figure 14. The vertical strip corresponds to stars, which have a fixed size, i.e. the size of the PSF, but vary in magnitude. For this test we select stars with 'i' band magnitude in the range 17 to 22.5, while the range in size is from 2.8 pixels to 3.4 pixels. These limits were visually determined and are shown by the black bounding box in Figure 14. These sources will henceforth be called the test sample. It is clear from the plot that this sample is completely dominated by stars. All sources in Figure 14 were cross-matched with Gaia EDR3 (Brown et al. 2021) to determine a clean sample of stars. Sources that were successfully cross-matched are used for PSF interpolation and for construction of guess values for forced measurement. We note that virtually all of the cross-matched sources form a subset of the test sample. Next we measure sources in 146 'i' band and 151 'r' band images using the three different methods. First, all sources cross-matched with Gaia EDR3 and brighter than SNR 15 in a given frame were measured using the traditional moment matching algorithm. Next, we measure sources in the test sample. To measure a given source in an individual image, a 70 x 70 pixel cutout at the location of the source was created. If any negative or zero values are found in the cutout, it is discarded. We also reject images with PSF size greater than 7 pixels, which corresponds to a seeing of 2.4′′. To obtain the initial guess shape, we perform PSF interpolation using the 10 nearest stars with inverse distance as weight. This method of PSF interpolation was found to perform well (Gentile et al.
2012). The guess shape can be written as

σ²xx(guess) = Σᵢ w(i) σ²xx,i ,

where σ²xx,i is the measured value of the i-th star and w(i) is its corresponding weight. The weight w(i) can be further simplified as

w(i) = (1/dᵢ) / Σⱼ (1/dⱼ) ,

where dᵢ is the distance of the i-th star from the source. σ²yy(guess) and σ²xy(guess) are calculated similarly. The interpolated PSF values were used as the initial guess for forced measurement, since the sources in the test sample are primarily stars.
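A minimal sketch of this inverse-distance PSF interpolation, assuming each reference star is stored as an (x, y, σ²xx, σ²yy, σ²xy) tuple (the function name and data layout are ours):

```python
import math

def interpolate_psf(source_xy, stars, n_nearest=10):
    """Inverse-distance-weighted interpolation of the PSF second
    moments at a source position, using the n nearest stars.
    `stars` is an iterable of (x, y, sxx, syy, sxy) tuples."""
    x0, y0 = source_xy
    nearest = sorted(stars,
                     key=lambda s: math.hypot(s[0] - x0, s[1] - y0))[:n_nearest]
    # Guard against a star landing exactly on the source position.
    dists = [max(math.hypot(s[0] - x0, s[1] - y0), 1e-9) for s in nearest]
    norm = sum(1.0 / d for d in dists)
    weights = [(1.0 / d) / norm for d in dists]
    sxx = sum(w * s[2] for w, s in zip(weights, nearest))
    syy = sum(w * s[3] for w, s in zip(weights, nearest))
    sxy = sum(w * s[4] for w, s in zip(weights, nearest))
    return sxx, syy, sxy
```
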
The guess position of the stars in the individual images was determined from the co-add. Forced measurement also requires a guess flux when using truncation and bias correction. The guess flux was determined using the ratio of counts of the 10 nearest stars in the individual images and the co-add. More specifically,

N_guess = N_coadd × ⟨N_image(star) / N_coadd(star)⟩ ,

where N_guess is the guess flux and N_coadd is the counts of the source in the co-add. N_image(star)/N_coadd(star) is the ratio of the counts in the image to the counts in the co-add for a nearby star that was used to interpolate the PSF. The angle brackets indicate the median ratio over the ten nearest stars.
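This guess-flux scheme follows directly from its definition; a short sketch (names are illustrative):

```python
import statistics

def guess_flux(n_coadd_source, star_counts_image, star_counts_coadd):
    """N_guess = N_coadd * median(N_image(star) / N_coadd(star)),
    where the median runs over the nearby stars (the ten nearest in
    the text) measured in both the individual image and the co-add."""
    ratios = [n_img / n_co
              for n_img, n_co in zip(star_counts_image, star_counts_coadd)]
    return n_coadd_source * statistics.median(ratios)
```
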
In order to compare the performance of the different algorithms, knowledge of the ground truth is crucial. This is only feasible in simulations; in real images the atmosphere, detector imperfections and a multitude of other factors make it impossible to know the ground truth. Hence, we decided to use the guess parameters as the ground truth. The results are shown in Figure 15. Shown in blue is the traditional moment matching algorithm, red shows forced measurement without truncation, and black shows forced measurement with truncation and bias correction. As can be seen, in the higher SNR regime the guess values and the values measured using the traditional moment matching algorithm are in excellent agreement. This shows that the guess values are a reasonably close approximation to the truth in most cases. The red and black points are slightly shifted along the x-axis for the sake of visual clarity. A measurement is considered to have failed if the flux is less than or equal to 0 or the variance of a source along the x or y axis is less than 0.
Clearly these outcomes are unphysical. Measurements where the measured size is greater than 6 times the size of the cutout are also rejected. In total we have 565,132 valid measurements. The x-axis was divided into 9 uniformly spaced bins in log space. The lowest SNR bin has 4280 samples, the smallest sample size of any bin. The second highest SNR bin has 130,726 samples, the largest of all bins. The median and standard deviation after k = 3 sigma clipping are plotted for each of the three methods in each bin. The performance of forced measurement with truncation and bias correction is clearly superior. At SNR ∼ 0.4, when using forced measurement with truncation, the fractional error in flux is 0.2 ± 1.0, whereas the fractional error in σ²xx is −0.0 ± 0.08. The failure rate is 28%. Using forced measurement without truncation in this SNR range yields a failure rate of 82% and a fractional flux error of 2.8 ± 1.8. This shows the importance of truncation at low SNR. The traditional moment matching algorithm performs worst in this SNR range, with a failure rate of 91% and an average fractional flux error of 3.8 ± 2.7.

Figure 13. The red points show the co-add measurements using the traditional moment matching algorithm, whereas the blue points show the median of measurements obtained from the individual images using forced measurement. The x axis is the SNR of individual images. We also inject an error of 10% in the initial guess flux, σ²xx, σ²yy and σ²xy. When simulating the individual sources, an error of 10% is injected in the flux and σ, and an error of 1 pixel is introduced in the centroid. All guess parameters except the centroid have a 10% over-bias; this level is shown by the black line. In virtually all cases forced measurement produces a better flux estimate. The accuracy at log10(SNR) < 0.1 using the forced measurement method is 5% ± 4.3% for flux; σ²xx can be determined with an accuracy of 10.4% ± 1.3% and σ²xy with an accuracy of 10.5% ± 4.8%. In the same SNR range, using traditional moment matching on the co-add to determine the same parameters gives an accuracy of 1.7% ± 9.9% for flux, while the error for σ²xx is 16.1% ± 26%. The percentage error for σ²xy is 18.6% ± 177%.
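The validity cuts and the per-bin summary statistic (median and standard deviation after k = 3 sigma clipping) described above can be sketched as follows; this is our illustrative implementation, not the paper's code:

```python
import statistics

def is_valid(flux, sxx, syy, size, cutout_size):
    """Validity cuts from the text: positive flux, non-negative axis
    variances, and a measured size no more than 6x the cutout size."""
    return flux > 0 and sxx >= 0 and syy >= 0 and size <= 6 * cutout_size

def clipped_median_std(values, k=3.0, max_iter=10):
    """Median and (population) standard deviation after iterative
    k-sigma clipping about the median."""
    vals = list(values)
    for _ in range(max_iter):
        med = statistics.median(vals)
        std = statistics.pstdev(vals)
        kept = [v for v in vals if abs(v - med) <= k * std]
        if not kept or len(kept) == len(vals):
            break  # converged: nothing (or everything) clipped
        vals = kept
    return statistics.median(vals), statistics.pstdev(vals)
```
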

CONCLUSIONS
In this paper, we have presented the intrinsic and background error components of the flux, centroid, size and ellipticity. Next we show that the traditional moment matching algorithm fails at SNR ≲ 10. To solve this, we propose a modification to the existing moment matching algorithm to measure the flux and shapes of sources at arbitrarily low SNR. To do this we apply the truncation (Equation 28) to the background subtracted image and run the forced measurement algorithm on it. We find that the flux and size parameters are over-estimated. We correct for the over-estimation using the functions f1 and g1 (Equations 29 and 30) to recover the original parameters. The failure rate of the new method drops to 0 in the range 0.1 < SNR < 1, compared to a failure rate of 89.5% ± 3.6% when using the traditional moment matching algorithm. In the same range, the percentage errors in flux, σ²xx and σ²xy reduce from averages of 396%, −1.9% and −40% to values that are consistent with 0. This method also works when we allow for convergence, although convergence happens only at extremely high SNR ≳ 100. This is not useful and hence we do not consider it further. We then show that we can use the forced measurement algorithm down to extremely low SNR even in the presence of substantial noise of ∼ 15% in σ² and flux, and an error of at least 1 pixel in the centroid along each axis. Finally, we show that in the presence of a 10% over-bias in the guess shape and flux, along with 10% noise, we are still able to recover the original parameters satisfactorily. In the range log10(SNR) < 0.1, the percentage error in flux using forced measurement is 5% ± 4.3%. The percentage error in σ²xx is 10.4% ± 1.3%, while in the same range the error in σ²xy is 10.5% ± 4.8%. In the same range, using the traditional moment matching algorithm on the co-add, we get errors of 1.7% ± 9.9% in flux. The percentage error for σ²xx is 16.1% ± 26%, whereas the error for σ²xy is 18.6% ± 177%. While the levels of error when using forced measurement and traditional moment matching are comparable, we note that in spite of a biased guess value, forced measurement gives us a better estimate of the flux than our initial guess, and σ² values that are no worse than the biased initial guess. We also get significantly tighter error properties. On applying this method to measure point sources in data obtained from WIYN-ODI, at SNR ∼ 0.4 we get a failure rate of 28% and a fractional error in flux of 0.2 ± 1.0. At the same SNR, forced photometry produces a fractional error in flux of 2.8 ± 1.8 with a failure rate of 82%. Comparing the failure rates, we find the proposed method allows us to measure sources having seven times lower SNR than is possible with traditional methods. In conclusion, this is a method that produces better measurements in the presence of reasonable estimates in most cases. In extreme cases with significant levels of bias and error, it produces measurements that are no worse than the original guess. Future studies with real images will refine the technique.

Figure 14. This plot shows the magnitude vs size (in pixels) of all sources measured in the 'i' band co-add. The vertical strip corresponds to stars. Shown in the black box are all the sources for which individual frame measurements are performed, i.e. the test sample. The test sample includes sources with magnitudes between 17 and 22.5, while the size is limited to between 2.8 and 3.4 pixels. These limits were determined visually, but as is clearly seen, almost all of these sources are likely to be stars.

Figure 2. The left panel shows how the flux over-estimation function f1 is related to other image parameters; the right panel shows how the σ² over-estimation function g1 is related to other image parameters. The blue points show the 4512 different cases we simulated to find the correction. The red curve is a piece-wise linear interpolation. The black line shows the approximate analytical functions in Equations 31 and 32.

Figure 3. The correction factor f∞ when convergence is allowed is shown on the left, and the correction factor g∞ on the right. Clearly both factors blow up at low SNR, implying convergence is only feasible at fairly high SNR.

Figure 4. The error adjustment factor for flux, i.e. pN. This is the factor by which the error in flux presented in Section 2.1 needs to be adjusted. The adjustment factor is needed because the initial guess is taken as the truth and a single iteration is performed when using forced measurement. The red curve is a piece-wise linear interpolation. The approximate logistic function is shown in Equation 36.

Figure 5. The error adjustment factor for the centroid, i.e. pµx, on the left, and for the sigmas, i.e. pσ²xx, on the right. The two are essentially identical but are shown separately for the sake of clarity. The red curve is a piece-wise linear interpolation. The approximate logistic functions are shown in Equations 37 and 38.

Figure 6. (a) The normalized SNR histogram for all 12773 sources. These sources are used to gauge the performance of the traditional moment matching algorithm, the forced measurement algorithm run until convergence, and the forced measurement algorithm. The graph has been truncated at SNR = 25. Similarly, (b) and (c) show the normalized histograms of the area and ellipticity of the sources, respectively.

Figure 7. In these plots we show the performance of the traditional moment matching algorithm for 13824 cases. We vary the size from 2 to 5 pixels and the flux from 1 to 10⁵ counts. The background varies from 50 to 2000 counts. Ellipticity values vary from 0 to 0.7, distributed between both e1 and e2. The upper left plot shows the failure percentage as a function of SNR. The upper right plot shows the percentage deviation of flux from the true value. The lower left and lower right plots show the percentage errors in σ²xx and σ²xy as functions of SNR, respectively. It is clear that at SNR ≲ 10 catastrophic failures become more common and the errors also become very large as expected, sometimes even exceeding 100%. In the range 0.1 < SNR < 1, the failure rate is 89.5% ± 3.6% and the percentage error in flux is 396% ± 474%. The error in determining σ²xx is −1.9% ± 48%, while the percentage error for σ²xy is −40% ± 211%.

Figure 8. In these plots we show the performance of our modified moment matching algorithm run until convergence for the same range of cases as Figure 7. We find that convergence is consistent only when SNR ≳ 100. This shows that forced measurement does not perform well when applied iteratively unless the source is at extremely high SNR ≳ 100.

Figure 9. In these plots we show the performance of the forced measurement algorithm for the same range of cases as Figure 7. Even at extremely low SNR, unlike the traditional moment matching algorithm, we manage to avoid catastrophic failures, as shown in the first graph. This is crucial for measuring ultra-faint sources. The error does increase significantly at low SNR. In the range 0.1 < SNR < 1, the failure rate is 0% and the percentage error in flux is −1.9% ± 21%. In the same range the error in σ²xx is 0% ± 0.8% and the error in σ²xy is −0.2% ± 4.7%.

Figure 11. The red points show the co-add measurements whereas the blue points show the measurements obtained from the individual images. The x axis is the SNR of individual images. We also inject an error of 15% in the guess flux and σ² parameters. When simulating the individual sources, an error of 15% is injected in the flux and σ, and an error of 1 pixel is introduced in the centroid. We see a slight bias of −2.9% ± 4.1% in the flux measurement when using forced measurement at log10(SNR) < 0.1. This can be attributed to the injected centroid errors. No bias is present in the forced measurement of σ²xx and σ²xy. Once again the tighter error properties of forced measurement are clear.

Figure 12. The red points show the x position of the centroid from the co-add using the traditional moment matching algorithm, whereas the blue points show the median of the individual measurements obtained using forced measurement. The simulations are done under the same conditions as Figure 11.

Figure 15. This plot shows the performance of the traditional moment matching algorithm in blue, forced measurement without truncation in red, and forced measurement with truncation and bias correction in black. Individual frame measurements were performed in the 146 'i' band and 151 'r' band images for all sources inside the black box shown in Figure 14. The x-axis is divided into 9 equal bins in log space. The points and error bars show the median and standard deviation after sigma clipping in each bin. It is clear that our method of forced measurement with truncation and bias correction outperforms the others in every aspect. The red and black data points have been shifted slightly along the x-axis for visual clarity.
DES (The Dark Energy Survey Collaboration 2005; Dark Energy Survey Collaboration et al. 2016) and the Rubin Observatory Legacy Survey of Space and Time (LSST Science Collaboration et al. 2009; Ivezić et al. 2019) are further examples of such multi-visit surveys.