Identifying lensed quasars and measuring their time-delays from unresolved light curves

Identifying multiply imaged quasars is challenging due to their low density on the sky and the limited angular resolution of wide field surveys. We show that multiply imaged quasars can be identified using unresolved light curves, without assuming a light curve template or any prior information. After describing our method, we use simulations to show that it can attain high precision and recall for high-quality data whose noise is negligible, well below the variability of the light curves. As the noise level increases to that of the Zwicky Transient Facility (ZTF), we find that precision can remain close to $100\%$ while recall drops to $\sim 60\%$. We also consider some examples from the Time Delay Challenge 1 (TDC1) and demonstrate that the time delays can be accurately recovered from the joint light curve data in realistic observational scenarios. We further demonstrate our method by applying it to publicly available COSMOGRAIL data for the observed lensed quasar SDSS J1226-0006. We identify the system as a lensed quasar based on the unresolved light curve and estimate a time delay in good agreement with the one measured by COSMOGRAIL using the individual image light curves. The technique shows great potential for identifying lensed quasars in wide field imaging surveys, especially the soon-to-be-commissioned Vera C. Rubin Observatory.


INTRODUCTION
In the field of astronomy and astrophysics, strong gravitational lenses have emerged as a powerful tool for determining several key quantities, for example the initial mass function (IMF) and the dark matter distribution in lensing galaxies and its time evolution (Mao & Schneider 1998; Metcalf & Madau 2001; Dalal & Kochanek 2002; Pooley et al. 2009; Oguri et al. 2014; Jiménez-Vicente & Mediavilla 2019). For more applications of strong lensing, we refer to the review by Treu (2010). Remarkably, strongly lensed variable sources can also be a source of precision cosmology (Treu & Marshall 2016). Precise time delay measurements can provide cosmological information, such as the value of the Hubble constant ($H_0$) (Refsdal & Bondi 1964; Refsdal 1964; Saha et al. 2006; Oguri 2007; Bonvin et al. 2017; Wong et al. 2020; Birrer et al. 2020; Birrer & Treu 2021). The time delay measurement of $H_0$ is independent of all other methods and can thus shed light on the ongoing tension (Verde et al. 2019) between local measurements and the $H_0$ inferred from early universe probes such as the cosmic microwave background (CMB) (Planck Collaboration et al. 2020).
So far, time delay cosmography has mainly relied on lensed quasars (QSOs). Hundreds of lensed QSOs have been detected, monitored and analyzed, making them the primary sources for current time delay cosmography efforts. However, other lensed transients (Oguri 2019) such as supernovae, repeating fast radio bursts (Li et al. 2018) and even gravitational waves (Liao et al. 2017) will become significant contributors in the near future. The first examples of multiply imaged supernovae have already been discovered (Kelly et al. 2015; Goobar et al. 2017; Rodney et al. 2021).
Traditionally, lensed QSOs are detected using either imaging or spectroscopy based methods (e.g., Huchra et al. 1985; Browne et al. 2003; Treu et al. 2018; Lemon et al. 2020). Usually, measuring the time delay requires monitoring the light curves of the resolved images with high-resolution telescopes for several years (e.g., Tewes et al. 2013). Substantial effort has gone into developing methods for extracting the time delays from the resolved image light curves (Press et al. 1992; Pelt et al. 1996; Hirv et al. 2007; Kelly et al. 2009; Hirv et al. 2011; Hojjati et al. 2013; Aghamousa & Shafieloo 2015). However, detecting lensed quasars and measuring the time delays are very challenging tasks. For example, it is often difficult to distinguish binary QSOs from lensed doubly imaged quasars using only imaging or spectroscopy data (Peng et al. 1999; Mortlock et al. 1999). Comparing the similarity of the light curves of close images by measuring the time delay has also been proposed as a way to distinguish lensed QSOs from star or QSO pairs (Pindor 2005). However, this approach requires resolved images, and monitoring the image light curves at high angular resolution can be very expensive in terms of telescope time.
An alternative approach consists of using the observed unresolved (joint) light curve to detect lensed QSO systems and subsequently measure the time delay. In this approach, one does not require the systems to be resolved a priori. Therefore, this approach can take advantage of the data from ongoing time domain surveys such as ZTF (Bellm et al. 2019) and Pan-STARRS1 (Chambers et al. 2016). The unresolved light curve approach will have broad application when the Vera Rubin Observatory (LSST Science Collaboration et al. 2009) starts the Legacy Survey of Space and Time (LSST). In principle, of course, images with sufficiently large separations can be resolved with advances in image processing techniques, e.g. the image deconvolution method developed by Magain et al. (1998), which allows for optical monitoring of systems in which the image separation is small compared to the seeing (Millon et al. 2020). For example, for LSST, with θ_PSF = 0.75 arcsec, Oguri & Marshall (2010) adopted 2/3 θ_PSF as the minimum separation that the survey can resolve. However, a method based on unresolved images can be extremely powerful and complementary. First, the unresolved method would allow the detection of images with separations too small to be identified in ground based imaging. Second, systems that are partially resolved could be difficult to identify in survey data, because deblending algorithms could partition them into a variable number of astronomical objects depending on seeing. A robust algorithm based on unresolved light curves could, for example, be applied to data that have been convolved to the lowest angular resolution of the time series in order to mitigate deblending issues and increase the signal to noise ratio.
In the past couple of decades, several methods have been proposed to detect lensed quasars and to measure their time delays using the joint unresolved light curves (Geiger & Schneider 1996; Shu et al. 2021; Springer & Ofek 2021a,b; Biggio et al. 2021). However, since quasar light curves are highly stochastic and display a broad variety, approaches based on forward modelling the light curves under specific assumptions may be powerful but restricted in their application to light curves well described by those assumptions, and may thus be incomplete and biased. Therefore, these attempts are likely to be less successful than estimated if the underlying assumptions are not a good description of real light curves, resulting in lower precision and purity. (The challenges in model agnostic approaches to extracting time delays from unresolved lensed supernova (SN) light curves are quite different, since one knows the broad shape of SN light curves, which follow a simple rise and fall in flux; see Bag et al. 2021; Denissenya et al. 2022.)
With the goal of attaining a more general method, and hopefully reaching higher completeness, we develop a new technique to identify lensed quasars and measure their time delays from unresolved light curves, without assuming any template or model for the quasar light curves, and without relying on additional information. Our method builds on that originally proposed by Geiger & Schneider (1996), adding a statistical procedure to identify the true lenses and time delays by minimizing fluctuations in the reconstructed light curve. This minimization is conceptually similar to that applied by the Pelt et al. (1996) method to resolved light curves, although different in implementation.

The paper is organized as follows. In Section 2 we describe the methodology, which consists of reconstructing the underlying image light curves from the observed joint light curve. We illustrate a mathematical degeneracy of the problem and then demonstrate a method to partially break the degeneracy, identify the lens systems, and measure the time delay. Next, in Section 3 we validate our method on simulated data with virtually negligible noise (i.e. in perfect conditions). In Section 4 we show that our method works on data with ZTF-like observational noise. In Section 5 we apply our technique to realistic simulations (Rungs 0 and 1) from the Time Delay Challenge 1 (Dobler et al. 2015; Liao et al. 2015, hereafter TDC1). In Section 6 we apply our method to the light curve of the lensed system SDSS J1226-0006 obtained from the COSMOGRAIL database. In Section 7 we summarize and conclude the work.

METHODOLOGY: RECONSTRUCTING THE UNDERLYING LIGHT CURVES
For simplicity, let us consider a strongly lensed quasar system that is composed of two images only. The observed flux of the lensed system is then the sum of the light curves of the individual images,

F(t) = f_1(t) + f_2(t).   (1)

The intrinsic light curves of all images can be described by a common function, say f(t), with different magnifications (µ_i) and time delays (t_i), i.e. f_i(t) = µ_i f(t − t_i). Defining µ ≡ µ_2/µ_1 and ∆t ≡ t_2 − t_1, the second image can be expressed in terms of the first,

f_2(t) = µ f_1(t − ∆t),   (2)

so that Eq. (1) can be recast as

F(t) = f_1(t) + µ f_1(t − ∆t).   (3)

Here µ and ∆t are the magnification and time delay of image 2 with respect to image 1; in other words, they are the magnification ratio and the time delay between the two images. From Eq. (3) we try to reconstruct the flux of image 1,

f_1(t) = F(t) − µ f_1(t − ∆t)   (4)
       = F(t) − µ F(t − ∆t) + µ^2 f_1(t − 2∆t),   (5)

where we substitute the expression for f_1(t − ∆t) from the functional form of f_1(t) in the first line while arriving at the second step. We can continue this substitution to obtain an infinite series expression for f_1(t) in terms of the combined flux,

f_1(t) = Σ_{n=0}^{∞} (−µ)^n F(t − n∆t).   (6)

Up to this point, we do not impose any restriction other than assuming that the lensed system has two images. If µ < 1, the infinite sum converges and, in principle, we can determine the light curve of image 1. Without any loss of generality, let us refer to the image with the larger magnification (µ_i) as image 1 and the other as image 2 (this can easily be generalised to systems with more than two images by sorting the µ_i's). This ensures µ ≡ µ_2/µ_1 < 1, which in turn guarantees that the sum in Eq. (6) converges. Now a positive (negative) ∆t means that image 2 (the image with the smaller magnification) arrives later (earlier) in time with respect to image 1. Therefore, given any choice of µ and ∆t, we can reconstruct the light curve of image 1 (the one with the larger magnification) using Eq. (6) and then calculate f_2(t) using Eq. (2). Let us call these reconstructed underlying light curves f_1,rec(t) and f_2,rec(t). In this approach it is ensured that F_rec(t) ≡ f_1,rec(t) + f_2,rec(t) = F(t) (the observed light curve of the lensed system). The key point is that for any choice of (µ, ∆t) one can obtain a unique solution for f_1(t) that satisfies Eq. (3) exactly. (Although we test only doubly imaged lensed systems in this article for simplicity, the approach can be generalised to systems with more than two images.)
We use cubic interpolation to obtain the flux in between two observed points, as required by the sum in Eq. (6). However, the higher order terms in the summation also require the flux outside the observed range. Since we cannot predict the quasar light curve beyond the observation range (owing to the fact that we know little about the time variability of quasar light curves in general), we assume for simplicity that F(t) remains flat outside the observation range. Below we show that this assumption affects the reconstruction only near one of the boundaries, with a negligible effect elsewhere (as demonstrated by Geiger & Schneider 1996).
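As a concrete illustration of Eqs. (4)-(6), the short Python sketch below (our own illustration, not the original implementation) reconstructs f_1,rec from a sampled joint light curve for a trial pair (µ_try, ∆t_try). It uses cubic interpolation between the observed epochs and holds the flux flat outside the observed range, as assumed above; the function name and the truncation depth n_terms are illustrative choices.

import numpy as np
from scipy.interpolate import CubicSpline

def reconstruct_f1(t_obs, F_obs, mu_try, dt_try, n_terms=50):
    """Reconstruct the brighter-image light curve via the truncated series
    f_1(t) = sum_n (-mu)^n F(t - n*dt), Eq. (6); t_obs must be increasing."""
    t_obs = np.asarray(t_obs, dtype=float)
    F_obs = np.asarray(F_obs, dtype=float)
    spline = CubicSpline(t_obs, F_obs)
    t_min, t_max = t_obs[0], t_obs[-1]

    def F(t):
        # cubic interpolation inside the observed range; flux held flat outside it
        return spline(np.clip(t, t_min, t_max))

    f1_rec = np.zeros_like(F_obs)
    for n in range(n_terms):
        f1_rec += (-mu_try) ** n * F(t_obs - n * dt_try)
    return f1_rec

Given f_1,rec, the fainter image follows from Eq. (2) as f_2,rec(t) = µ_try f_1,rec(t − ∆t_try), so that the sum of the two reconstructions reproduces the observed joint light curve by construction.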

Mathematical degeneracies
In this section we demonstrate that the reconstruction technique described above yields mathematically degenerate lensed solutions from any joint light curve. First let us consider high quality data with negligible observational noise, well below the variability of the light curves, so that we have F_true(t) = f_1,true(t) + µ f_1,true(t − ∆t). Then we reconstruct the light curve of the first image from F_true(t) for different choices of the trial {µ_try, ∆t_try} using Eq. (6).
In order to illustrate the mathematical degeneracy we consider a simulated lensed system composed of two images with the true time delay ∆t_true = 24.14 days and the magnification ratio µ_true = 0.752. This system is labeled as system 6 in the simulated set that we use for training and validation purposes in the next section. The details of the simulation are given in Section 3. We compare the reconstructed light curves of the underlying images and the combined one (f_1,rec, f_2,rec and F_rec) with the corresponding true light curves (f_1,true, f_2,true and F_true) in Figure 1 for 4 choices of {µ_try, ∆t_try}. In all 4 panels, the reconstructed joint light curve matches the truth accurately, i.e. F_rec = F_true (notice that the red and the black dashed curves coincide in all the panels). In the top panel, we choose the trial magnification ratio and time delay equal to the true values, µ_try = µ_true and ∆t_try = ∆t_true, finding that the reconstructed image light curves match the corresponding true curves, i.e. f_1,rec ≈ f_1,true and f_2,rec ≈ f_2,true.
In the other panels, f_1,rec and f_2,rec are different from f_1,true and f_2,true respectively, although F_rec(t) = F_true(t) is maintained in all 4 panels. Note that flux is shown in arbitrary units in figures throughout the article, since only the relative time variability of the light curve matters.
We estimate the precision of the reconstruction of the combined flux by calculating the mean relative difference

E_F = (1/N_D) Σ_{i=1}^{N_D} |F_rec(t_i) − F_true(t_i)| / F_true(t_i),   (7)

where N_D is the number of data points in a light curve (number of observation epochs). For any choice of {µ_try, ∆t_try} we find that E_F ∼ O(10^−15), which is simply a manifestation of numerical round-off error. This clearly tells us that for any choice of {µ_try, ∆t_try} we always recover the combined flux exactly (F_rec(t) = F_true(t)). Using the true values, {µ_try, ∆t_try} → {µ_true, ∆t_true}, we reconstruct the underlying images, f_1,rec → f_1,true and f_2,rec → f_2,true. If the trial {µ_try, ∆t_try} are different from the true values, the joint light curve is still reconstructed exactly but the light curves of the individual images are wrong. We do not yet discuss cases with a significant amount of noise/uncertainty in the data, because even with negligible noise (in perfect conditions) we obtain degenerate solutions. Later in the article, we include ZTF-like noise in the data when estimating the time delay using a version of this approach, modified to break the degeneracy.
As we assume the observed joint light curve to be flat outside the observed time range, some error is introduced in the reconstruction of the first image (f_1,rec) near one of the boundaries, even for the true choices µ_try = µ_true and ∆t_try = ∆t_true. For example, a positive ∆t_try introduces the maximum error in f_1,rec near the left edge, i.e. for t ≤ t_0 + ∆t_try where t_0 is the starting time, and the reconstruction gradually becomes more accurate for larger t as more terms from the observed range start to contribute. The same happens, but from the opposite boundary, when ∆t_try is negative. This behaviour is demonstrated in Figures 3 and 4 of Geiger & Schneider (1996). Geiger & Schneider (1996) correctly emphasised that one obtains infinitely many solutions due to a mathematical degeneracy and concluded that it is impossible to determine the correct time delay without assuming additional information, such as the intrinsic variability of the quasar light curves. However, as we detail in the next subsection, there is a way to break the degeneracy (partially), identify the lensed systems, and measure the time delay without requiring any additional information.

Identifying the true solution
In this section we show that the true time delay can be identified by breaking the degeneracy with ∆t try via minimization of fluctuations in the reconstructed light curve.
To illustrate this approach, we first consider the cases with negligible noise. Having demonstrated that the approach works on perfect data we then apply it to cases with substantial observational noise.
For this illustration, we focus on the same example as in the previous section (lensed quasar system 6) taken from the training set. We do not use the values of µ_true and ∆t_true anywhere in the analysis from this point on; however, the true values help us train our algorithm. Since the reconstructed light curve of the second image, i.e. f_2,rec, is just a scaled (by µ_try) and shifted (by ∆t_try) version of the light curve of the first image (f_1,rec), from now on we focus only on f_1,rec. Remember that we refer to the brighter (fainter) image as the first (second) image.
In Figure 2 we compare the reconstructed light curves of the first/brighter image (f_1,rec) corresponding to different choices of ∆t_try while keeping µ_try = µ_true fixed. The left panel shows the light curves for the full period of observation, whereas the right panel zooms into the time interval between (100, 300) days. The dashed blue curve in both panels represents f_1,rec corresponding to ∆t_try = ∆t_true (µ_try is already fixed to µ_true), which matches very well the true light curve of the first image f_1,true (Figure 1). Crucially, for values of ∆t_try different from ∆t_true we get more overall fluctuations in the respective f_1,rec curves around their mean values (notice that the red and green curves in the right panel of Figure 2 show more fluctuations than the dashed blue curve). Figure 3 shows the same set of curves as in Figure 2 but for another choice, µ_try = 0.5 (which is different from µ_true). Naturally, all the reconstructed light curves now lie above the true light curve, since µ_try (which measures the contribution of the second image) is smaller than µ_true. However, we observe the same feature: the f_1,rec's for ∆t_try ≠ ∆t_true exhibit more fluctuations than the f_1,rec corresponding to ∆t_try = ∆t_true. Also, we observe that the f_1,rec's are simply scaled somewhat according to the different choices of µ_try.
For ∆t_try = ∆t_true, one gets minimal fluctuation in the reconstruction of the individual image light curves, e.g. f_1,rec. For other trial time delays, ∆t_try ≠ ∆t_true, the f_1,rec's tend to exhibit more fluctuations. The only exception, as we find later, is ∆t_try = 0, for which the reconstructed light curve is just a scaled version of the observed light curve and thus produces a generic minimum in the fluctuations, which can be ignored. The effect of choosing different µ_try's appears to be negligible in this regard. We find that all the systems show the above characteristic. This is not unexpected: light curves that are reconstructed using the wrong trial time delay will mix fluxes at different intrinsic times, which are expected to differ more than fluxes at the same intrinsic time. Conceptually, this is the same principle adopted by Pelt et al. (1996) to identify the time delays from spatially resolved light curves.
In conclusion, we can distinguish the true underlying light curve among all the reconstructions by measuring the amount of fluctuation in the reconstructed light curves, e.g. in the f_1,rec's, corresponding to different ∆t_try (while keeping µ_try fixed). We also tested the effect of trying different choices of µ_try. For a fixed µ_try, we estimate the amount of fluctuation in f_1,rec corresponding to each ∆t_try using the expression

ℰ(∆t_try) = (1/N_D) Σ_{i=1}^{N_D} [ f_1,rec(t_i) − ⟨f_1,rec⟩ ]²,   (8)

where N_D is the number of data points (observation epochs) and ⟨f_1,rec⟩ is the mean of the reconstructed light curve. Note that the quantity ℰ has the dimension of flux squared. For completeness, we experimented with other metrics (alternatives to Eq. (8)) for quantifying the fluctuation, e.g. using the aggregated deviation of f_1,rec with respect to its smoothed version. Although different metrics produce consistent results, we do not find any that performs better than the simple form given in Eq. (8), especially in the presence of a substantial amount of noise in the data.
Figure 3. Same as Figure 2 but with µ_try = 0.5, which is different from µ_true; the legend shows f_1,rec for ∆t_try = 24.14, 10.00 and 40.00 days. Again we find that the light curves f_1,rec corresponding to ∆t_try ≠ ∆t_true have more fluctuations than the true one.

As an illustration we again consider the case of lensed system 6. We compute f_1,rec for all ∆t_try ∈ (−130.0, 130.0) days with a resolution of 0.1 days at a fixed µ_try. For each ∆t_try, we separately calculate the corresponding value of ℰ using Eq. (8). The three panels of Figure 4 show ℰ as a function of ∆t_try for system 6 for three fixed values of the trial magnification ratio, µ_try = 0.3, 0.5, 0.7 from left to right. The gray dashed vertical lines in each panel mark ∆t_try = ±∆t_true. The red and two cyan lines represent the mean and the 1σ, 2σ values of the entire curve in each panel. We observe the following.
• The ℰ(∆t_try) curve looks quite symmetric around ∆t_try = 0 and the minima appear at similar |∆t_try| values in the positive and the negative domains.
• At ∆t_try = 0, which essentially gives a 1-image solution (let us call it 'unlensed'), we get a global minimum. This global minimum, implying that the least fluctuation is present in the unlensed solution, always appears at ∆t_try = 0 for all the systems (even in the presence of a substantial amount of noise in the joint light curve data).
• At ∆t try ≈ ±∆t true (marked by the vertical dashed lines), we notice a pair of prominent secondary minima. One can determine if the system is lensed by identifying this pair of prominent secondary minima in addition to the global minimum.
• The three observations above hold for all reasonable choices of µ_try. Moreover, the positions of the minima (including the pair of secondary minima) do not change significantly with µ_try. Since the target secondary minimum in the ℰ(∆t_try) curve appears at both ±∆t_true, we cannot determine whether the time delay is positive or negative with this approach. This is expected, since from unresolved data we cannot establish whether the brighter or the fainter image is the leading one (in our convention, the brighter image is tagged as 'first'). Furthermore, the positions of the minima in the ℰ(∆t_try) curve are quite insensitive to the choice of µ_try, as evident from the three panels of Figure 4. Thus, our method can estimate the time delay and confirm the lensing nature of an unresolved source, but cannot be used to estimate the magnification ratio. Since different choices of µ_try do not affect the final results, we fix µ_try = 0.3 throughout the article.
We conclude this section by introducing a dimensionless version of the fluctuation statistic:

Σ(∆t_try) = [ ℰ(∆t_try) − ⟨ℰ⟩ ] / σ_ℰ,   (9)

where we subtract the mean ⟨ℰ⟩ from ℰ(∆t_try) and then scale by the standard deviation of the whole ℰ(∆t_try) curve, σ_ℰ. Therefore, Σ essentially measures the fluctuations in units of the standard deviation of the ℰ(∆t_try) curve. As an illustration, we re-plot the left panel of Figure 4 in terms of Σ as a function of the trial time delay in Figure 5, for system 6 with fixed µ_try = 0.3.
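As an illustrative sketch (again our own, with the 0.1 day grid spacing used in the text), the fluctuation statistic of Eq. (8) and its normalised version Σ of Eq. (9) can be evaluated on a grid of trial time delays as follows, reusing the reconstruct_f1 helper sketched in the previous section.

import numpy as np

def fluctuation_curve(t_obs, F_obs, mu_try=0.3, dt_grid=None):
    """Return the grid of trial delays and the normalised fluctuation Sigma."""
    if dt_grid is None:
        dt_grid = np.arange(-130.0, 130.0 + 0.1, 0.1)   # days, 0.1 day resolution
    E = np.empty_like(dt_grid)
    for k, dt_try in enumerate(dt_grid):
        f1_rec = reconstruct_f1(t_obs, F_obs, mu_try, dt_try)
        E[k] = np.mean((f1_rec - f1_rec.mean()) ** 2)   # Eq. (8): variance about the mean
    return dt_grid, (E - E.mean()) / E.std()            # Eq. (9): dimensionless Sigma

Lensed systems are then flagged by the presence of a pair of prominent secondary minima of Σ near ∆t_try ≈ ±∆t_true, in addition to the generic global minimum at ∆t_try = 0.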

VALIDATING THE METHOD ON SIMULATIONS
In this section and in the following one we train and validate the method, proposed in the previous section, on a number of simulated lensed systems. The simulations are taken from the Time Delay Challenge 1 (TDC1). However, for simplicity, we ignore the microlensing effect and only consider doubly-imaged systems. Simulating the observation of a doubly-imaged but unresolved quasar therefore involves three conceptual steps (see Dobler et al. 2015;Liao et al. 2015, for details): 1) The quasar's intrinsic light curve in a given band is generated at the accretion disk of the black hole in an active galactic nucleus (AGN) and modelled by a Damped Random Walk (DRW) process. 2) The foreground lens galaxy causes multiple imaging, leading to two light curves that are offset from the intrinsic light curve (and each other) in both amplitude (due to magnification), and time. 3) Since we assume the images are unresolved, we combine the individual image light curves. In this section we consider high quality light curves with negligible noise and one day cadence. The next section deals with data with realistic noise, at the level one can expect for, e.g., ZTF.
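For concreteness, the sketch below implements these three steps under simplifying assumptions of our own: daily sampling, an integer time delay in days, and illustrative DRW parameters (tau_drw, sigma_drw) rather than the TDC1 values.

import numpy as np

def simulate_unresolved_pair(n_days=1500, tau_drw=200.0, sigma_drw=0.1,
                             mean_flux=10.0, mu=0.7, dt=24, seed=0):
    """Simulate a doubly imaged but unresolved quasar light curve.

    Step 1: intrinsic variability modelled as a damped random walk (DRW).
    Step 2: a second image scaled by mu and delayed by dt days (dt >= 0).
    Step 3: the two image fluxes are summed (unresolved observation)."""
    rng = np.random.default_rng(seed)
    n_pad = n_days + dt
    a = np.exp(-1.0 / tau_drw)                     # one-day decay factor
    x = np.zeros(n_pad)
    for i in range(1, n_pad):                      # DRW recursion with unit time step
        x[i] = a * x[i - 1] + sigma_drw * np.sqrt(1.0 - a * a) * rng.standard_normal()
    f = mean_flux + x                              # intrinsic light curve f(t)
    f1 = f[dt:dt + n_days]                         # brighter image
    f2 = mu * f[:n_days]                           # fainter image, delayed copy
    return np.arange(n_days), f1 + f2              # epochs (days) and joint flux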
Testing the algorithm on a training set

First we analyse a 'training' set comprising 10 lensed and 10 unlensed (single-image, hence no lensing) systems with known time delays (and magnification ratios), which have been used for optimizing the algorithm. The set includes a diversity of intrinsic light curves. The normalised fluctuation curves, Σ as a function of trial time delay (∆t_try), are shown in the left panel of Figure 6 for all the lensed systems with µ_try = 0.3. The right panel shows the same Σ(∆t_try) curves for the 10 unlensed systems (again with µ_try = 0.3).
Lensed set: For each of the lensed systems we notice that: (i) Σ(∆t_try) is approximately symmetric around ∆t_try = 0, (ii) a global minimum is found at ∆t_try = 0, (iii) a pair of secondary minima is found at ∆t_try ≈ ±∆t_true. For 9 out of 10 lensed systems, both minima at ∆t_try ≈ ±∆t_true have Σ < −2.0. The only exception is system 4, for which the minima near ∆t_try = ±∆t_true are somewhat shallower, Σ ∼ −1.5. System 4 has by far the smallest magnification ratio, µ_true = 0.14; the other 9 µ_true's in the training set are distributed between 0.263 (system 8) and 0.92 (system 5). Thus, for system 4, the contribution from the second/fainter image is only 12% of the joint light curve, making it very difficult to identify the system as lensed.
Unlensed set: As expected, for the unlensed systems we find a global minimum at ∆t_try = 0. No significant additional minima are found (i.e. none deeper than Σ = −2). Therefore, we correctly identify all 10 systems as unlensed.
Selection criteria: In view of the results for the training lensed and unlensed sets of systems, we lay down the following conservative selection criteria to detect the lensed systems.

C.1 Apart from the global minimum at ∆t_try = 0, the deepest minima in the positive and negative ∆t_try domains must form a pair occurring at similar absolute values, i.e. at ∆t_try ≈ ±∆t_est.

C.2 Both of these secondary minima must be deeper than 2σ, i.e. both must have Σ < −2.0.

Table 1. Training set with negligible noise. We successfully identify 9 out of 10 systems as lensed, and we estimate the time delay with high accuracy (within the adopted uncertainty corresponding to the sampling of the time delay trial). For system 4, the one with the lowest magnification ratio, we found minima in the fluctuation parameter (Σ) at the true time delay, but they were not deep enough to pass our stringent selection criteria.
If both the above criteria are met, we identify the system as lensed with the estimated time delay ∆t est ; otherwise it is identified as an unlensed system.
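A minimal sketch of this selection, assuming a Σ(∆t_try) curve computed as above; the exclusion window around zero delay, the local-minimum window, and the matching tolerance are illustrative choices of ours, not values prescribed in the text.

import numpy as np
from scipy.signal import argrelmin

def classify_conservative(dt_grid, sigma, match_tol=1.0, zero_excl=2.0):
    """Apply criteria C.1-C.2: return (is_lensed, dt_est)."""
    idx = argrelmin(sigma, order=5)[0]                 # local minima of Sigma
    idx = idx[np.abs(dt_grid[idx]) > zero_excl]        # drop the generic dip at dt ~ 0
    pos = idx[dt_grid[idx] > 0]
    neg = idx[dt_grid[idx] < 0]
    if len(pos) == 0 or len(neg) == 0:
        return False, None
    p = pos[np.argmin(sigma[pos])]                     # deepest minimum, positive delays
    n = neg[np.argmin(sigma[neg])]                     # deepest minimum, negative delays
    # C.1: the two minima must occur at similar |dt_try|
    if abs(dt_grid[p] + dt_grid[n]) > match_tol:
        return False, None
    # C.2: both minima must be deeper than 2 sigma
    if sigma[p] > -2.0 or sigma[n] > -2.0:
        return False, None
    return True, 0.5 * (dt_grid[p] - dt_grid[n])       # estimated |time delay|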
Following the above conservative selection criteria we identify 9 out of 10 lensed systems, and none of the 10 unlensed systems, as lensed. Thus we have 100% precision and a recall of 90% for the training set considering high quality data with negligible noise.
The estimated time delays for the 9 lensed systems are presented in Table 1 and compared with the corresponding true time delays. The time delay estimations are accurate and consistent with the respective truths within the sampling resolution of the trial time delay (0.1 days). We empirically take the time delay sampling resolution as the uncertainty in the estimate in this case.
We note that the uncertainty estimate is substantially more complex for data with realistic noise, as we describe in the next section. A proper error estimation for this type of statistics requires analysing a large number of simulations which is beyond the scope of the present paper and will be carried out in detail in the future.

Testing the algorithm on a blind set
In order to test the method and prevent experimenter bias, we carried out the following blind test. One coauthor simulated a set of 20 lensed and unlensed systems without revealing anything but the joint light curves to the rest of the team. The true time delays (of the lensed systems only) were disclosed only after the results had been frozen. By applying the conservative selection criteria introduced above we find 10 lensed systems and 10 unlensed systems. The Σ(∆t_try) curves for the systems we identify as lensed are shown in the left panel of Figure 7. The right panel shows the same for the systems we identify as unlensed. The estimated time delays of the 10 lensed systems are given in Table 2. Building on our findings for the training set, we report the sampling resolution (0.1 days) as the uncertainty in the time delay estimation for the blind case too.

Figure 7. Blind set with negligible noise. We identify 10 systems as lensed by detecting a pair of prominent secondary minima (more than 2σ deep) near ∆t_try = ±∆t_est (ignoring the global minimum at ∆t_try = 0). Note that the trial magnification ratio is kept fixed at µ_try = 0.3.
After unblinding, we discover that there are 10 lensed and 10 unlensed systems in the blind set. We identify all of the lensed cases correctly and accurately estimate their time delays. Therefore, combining the training and blind sets, we have correctly classified 39 out of 40 systems in total (19 out of 20 lensed cases and all 20 unlensed cases), corresponding to 100% precision and 95% recall. This exercise establishes that in perfect conditions (when observational noise is negligible and the sampling cadence is much finer than the time delay) one can identify the lensed cases with extreme precision and measure the time delays very accurately.

DEALING WITH NOISY DATA
In this section we introduce realistic noise in the joint light curve data, F_obs(t). In the Time Delay Challenge 1 (TDC1) simulations, in order to add the photometric noise expected for LSST, an rms photometric uncertainty was first drawn from a Gaussian of mean 0.053 and width 0.016 nanomaggies, and then a noise value was drawn from a Gaussian of width equal to that rms. In order to validate our method on the data quality of smaller-aperture telescopes, such as the ongoing ZTF survey, we keep the noise level in our simulations three times higher than that of LSST. In other words, we draw the rms photometric uncertainty from a normal distribution with mean 0.159 and width 0.048 nanomaggies.
We notice that applying the procedure described in the previous section directly to the noisy data leads to significant spurious features and mis-identifications. However, we find that the performance of the algorithm is dramatically improved by smoothing the observed light curve prior to the application of our method. In practice, we follow the iterative smoothing algorithm (Shafieloo et al. 2006; Shafieloo 2007; Shafieloo & Clarkson 2010; Aghamousa & Shafieloo 2015), summarized in Appendix A. The choice of smoothing scale is important. Instead of choosing just a single smoothing scale, we choose three smoothing scales (δ = 3.0, 4.0, 5.0 days) and combine the fluctuation curves corresponding to each δ, while keeping N_it = 10 (the results are relatively insensitive to the choice of N_it).
After smoothing, we identify the lensed systems and then estimate the corresponding time delays by tracking the minima of the final normalised fluctuation curve Σ(∆t try ).

Data with ZTF-like noise: training set
We consider the same training lensed and unlensed sets (each with 10 systems) discussed in the noiseless case, adding ZTF-like noise. Let us take the example of system 7 from the lensed set. The smoothed light curve corresponding to the smoothing scale δ = 4.0 days is shown in Figure 8 as a black curve. The smoothed light curves corresponding to δ = 3.0 and 5.0 days are quite similar to that for δ = 4.0 days, while smoothed light curves corresponding to lower δ exhibit more fluctuations. Using the smoothed light curves we measure the fluctuation in the respective reconstructions (f_1,rec) separately using Eq. (8). Then, we calculate the fluctuation estimator Σ(∆t_try) from the sum of the ℰ(∆t_try) curves using Eq. (9). Hence, we combine the information from the three smoothing scales.
The fluctuation estimator Σ for system 7 is shown as a function of ∆t_try with fixed µ_try = 0.3 in Figure 9. Due to our smoothing procedure, the Σ(∆t_try) curve has fewer minima than in the negligible noise case (compare the plot for system 7 in the left panel of Figure 6 with Figure 9). Again, we notice the global minimum at ∆t_try = 0, which corresponds to the unlensed solution and hence can be ignored. In addition, we find a pair of secondary minima at ∆t_try = −16.1 and 16.0 days with depths Σ = −2.13 and −2.15 respectively. Since the minima satisfy the conservative selection criterion C.2, the system is identified as lensed with estimated time delay ∆t_est = 16.05 days, which matches very well the true time delay, ∆t_true = 15.9 days.
We then analyse all the lensed and unlensed light curves from the training set with ZTF-like noise. We correctly identify 2 out of 10 lensed systems (systems 3 and 7), which have secondary minima deeper than Σ = −2.0, thus satisfying the conservative selection criterion C.2. Five other systems also have prominent pairs of secondary minima in their Σ(∆t_try) curves, but the minima are not deep enough to satisfy the conservative selection criteria. The left panels of Figure 10 show Σ(∆t_try) for these seven systems exhibiting a prominent pair of secondary minima near the true time delay.
Furthermore, we identify all the unlensed systems correctly using the conservative criterion C.2. The right panels of Figure 10 show Σ(∆t_try) for some unlensed systems. No significant pair of secondary minima is found.

Relaxed selection criteria
Figure 10. Training set with ZTF-like noise. Apart from the global minimum at ∆t_try = 0, we find a pair of prominent minima (more than 2σ deep) at ∆t_try ≈ ±∆t_true for two lensed systems (systems 3 and 7). However, with the relaxed criteria we identify four other systems (1, 2, 5, 6) as highly probable lensed and one system (4) as probably lensed. Furthermore, using the conservative criterion C.2 all unlensed systems are correctly identified as unlensed, while the relaxed criteria give one false probable case (system 4). The trial magnification ratio is kept fixed at µ_try = 0.3.

From the left panels of Figure 10 it is evident that for five lensed systems (1, 2, 4, 5, 6) the secondary minima near the true time delay are much deeper than the other (false) minima, but not deeper than the conservative selection threshold Σ = −2.0. Also, from the training set analyses we observe that for unlensed cases there are many false shallow minima with comparable depths in the final Σ(∆t_try) curves. Based on these findings we formulate a set of relaxed selection criteria aimed at increasing the recall, possibly in return for a decrease in precision. These criteria are illustrative: in practice, the optimal strategy will depend on the data quality and on whether recall or precision is the main goal of a search. We leave the tailoring of the criteria to specific needs for future work. A sketch of the resulting classification logic is given after the list. Our illustrative relaxed criteria are:

1. First, we require that the deepest minima (apart from the global minimum at ∆t_try = 0) in the positive and negative domains of ∆t_try occur at similar absolute values, i.e. at ∆t_try ≈ ±∆t_est. If such a pair cannot be found, we identify the system as confirmed unlensed.
2. If both the secondary minima are deeper than 2σ, i.e. both have Σ < −2.0, we detect the system as confirmed lensed. This is the conservative selection criterion introduced previously.
3. On the other hand, if either of the secondary minima is shallower than 1σ, i.e. either has Σ > −1.0, we identify the system as confirmed unlensed.
4. If both the minima are deeper than 1σ, we measure the difference in depths between the secondary minimum and the next deepest minimum (the 'third minimum') in the positive and negative ∆t_try domains separately. If the secondary minima are at least 50% deeper than the third minimum in their respective domains, we consider the system a probable lensed case. Depending on the depth of the third (false) minima we then distinguish the following cases:

• If no minimum other than the pair of secondary minima is deeper than Σ = −1.0, we identify the system as a highly probable lensed case.
• If the third minimum in either the positive or the negative ∆t_try domain is also deeper than Σ = −1.0, we identify the system as a probable lensed case.
5. If either of the secondary minima is not sufficiently deeper than the corresponding third minimum, we identify the system as a probable unlensed case.
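The following sketch turns the relaxed criteria into a classifier. It takes the depths (in units of σ, so negative numbers) of the secondary minima and of the next deepest 'third' minima in the positive and negative ∆t_try domains; finding those minima can reuse the minima search sketched earlier, and criterion 1 (the existence of a symmetric pair) is assumed to have been checked upstream.

def classify_relaxed(sec_pos, sec_neg, third_pos=None, third_neg=None):
    """Classify a system from the depths of its Sigma minima (relaxed criteria).

    sec_pos/sec_neg: depths of the secondary minima at positive/negative delays.
    third_pos/third_neg: depths of the next deepest minima, or None if absent."""
    # criterion 2: both secondary minima deeper than 2 sigma
    if sec_pos < -2.0 and sec_neg < -2.0:
        return "confirmed lensed"
    # criterion 3: either secondary minimum shallower than 1 sigma
    if sec_pos > -1.0 or sec_neg > -1.0:
        return "confirmed unlensed"
    # criterion 4: secondary minima at least 50% deeper than the third minima
    def dominant(sec, third):
        return third is None or sec < 1.5 * third      # depths are negative numbers
    if dominant(sec_pos, third_pos) and dominant(sec_neg, third_neg):
        others = [d for d in (third_pos, third_neg) if d is not None]
        if all(d > -1.0 for d in others):
            return "highly probable lensed"
        return "probable lensed"
    # criterion 5: the pair does not stand out sufficiently
    return "probable unlensed"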
Using these relaxed criteria we identify two systems from the lensed training set, 3 and 7, as confirmed lensed cases. Four other systems (1, 2, 5, 6) are identified as highly probable lensed, whereas system 4 is identified as probably lensed. The remaining three systems are identified as unlensed. Thus we identify all seven systems from the left panels of Figure 10 as lensed with varying degrees of certainty (confirmed, highly probable and probable cases). The estimated time delays for these systems are given in Table 3. We can see that the estimated time delays are accurate even in the presence of significant uncertainty in the data. Note that the relaxed criteria incorrectly identify only one system among the ten unlensed systems in the training set. Thus, these relaxed criteria produce 87.5% precision with a recall of 70% when we consider ZTF-like noise in the data for the training sets.

Error estimation
The uncertainty in our time delay estimations depends on several factors, such as the intrinsic time variability of the light curves, the noise in the data, the cadence, the number of observation epochs, etc. Thus, a proper statistical determination of the uncertainty requires a large number of simulations spanning a range of conditions. Clearly, that is beyond the scope of the present paper, which is aimed at introducing the method as a proof of concept. However, we found that the estimated time delays are consistent with the truth within 5% for all the cases we study in this article with a reasonable amount of noise in the data (these cases include the training and blind simulations, the simulations from TDC1, and one observed system from the COSMOGRAIL database, as described below). Therefore, heuristically, 5% seems to be a reasonable initial assessment of the uncertainty of the estimated time delay for the time being. In the future, when applying the method to search for new lenses and measure their time delays, we plan to do a proper error analysis based on a large number of simulations mimicking the actual observing conditions.

False negatives
Next we look deeper into the three lensed systems in the training set (systems 8, 9 and 10) that we missed, i.e. the false negative cases. Figure 11 shows the Σ(∆t_try) curves for these systems. Two systems (8 and 10) show a pair of secondary minima at ∆t_try ≈ ±∆t_true. However, these pairs of minima are either not dominant or not deep enough. For system 9, the actual time delay is blurred within the central minimum. These three systems have relatively short time delays, ∆t_true = 13.27, 5.13 and 7.42 days respectively, and are likely lost due to the effect of smoothing.
To gain further insight, in Figure 12 we compare the observed light curves of three lensed systems. The system (3) on the left is identified as lensed, but the middle and right ones are false negatives. Comparing the light curves, the false negative cases are likely due to a combination of (i) noise suppressing the intrinsic fluctuation in the light curves; (ii) short time delays that are confused with the global minimum at ∆t try = 0 due to the noisy data and smoothing process.

Data with ZTF-like noise: blind set
Next we analyse a blind set consisting of 20 light curves with the same ZTF-like noise level as in the training set. The conservative selection criteria identify only one confirmed lensed case with a pair of secondary minima deeper than Σ = −2.0. However, the relaxed criteria detect two additional highly probable lensed systems and two additional probable lensed cases. The estimated time delays for these five systems are presented in Table 4. Furthermore, we identify all the 10 unlensed systems correctly, so we do not have any false positives. In conclusion, combining the training and blind sets, the method results in a precision of 92.3% with 60% recall.

Table 3. Training set with ZTF-like noise. The estimated time delays are compared with the truths for the seven systems correctly identified as lensed. We identify the top two systems as confirmed lensed. The next four systems, marked with a star, are identified as highly probable lenses. System 4 is identified as probably lensed, i.e. with lesser certainty (marked with double stars). For all the seven systems, the estimated time delays are accurate within a few percent.

Table 4. Blind set (considering ZTF-like noise in the light curve data). The estimated time delays for the five systems that we identify as lensed in the blind set. We identify the single system at the top as confirmed lensed. Two systems (7, 9), marked with a star, are identified as highly probable lensed cases. Another two systems (3 and 13) are identified as probably lensed, i.e. with lesser certainty (marked with double stars).

APPLYING THE METHOD TO TIME DELAY CHALLENGE 1 (TDC1) SIMULATIONS
The simulated data used so far to establish the method are very well sampled, with 1 day cadence. In Appendix B we test the method on simulated data with a cadence of 3 days. We find that even with the poorer cadence one can still identify the lensed systems; however, the signal in the Σ(∆t_try) curves is reduced, as expected. Nevertheless, in realistic scenarios, quasar light curves are observed seasonally over multiple years, offering multiple patches in the data. These patches can be used separately in order to boost the signal in our analysis.
To test the performance on realistic multi-year data, we analyse a number of light curves for double systems taken from the Time Delay Challenge 1 (TDC1) simulations (Dobler et al. 2015; Liao et al. 2015). In this section, we show results, as examples, for a subset of systems from Rung 0 and Rung 1 of TDC1 selected to have relatively low noise levels compared to their variability. Both rungs have good cadence (3 days on average with a dispersion of 1 day) and around 400 observation epochs. The systems in Rung 0 are observed for 5 years, while those in Rung 1 are observed for 10 years.
We stress that this paper is meant only to introduce and illustrate the technique. Therefore, for computational reasons we restrict ourselves to a subset of light curves taken from Rung 0 and 1 of TDC1. We leave for future work a systematic exploration of all the systems in TDC1.

TDC1, Rung 0
In Rung 0 the light curves are sampled in roughly 400 observation epochs over a period of 5 campaign years. Let us first consider the example of system 127. The simulated light curves of the two images are shown by blue and green curves in Figure 13. We add the two fluxes to construct the equivalent unresolved light curve, shown in red in the figure, and use it to test our method.
The joint light curve has five patches (corresponding to the five observing seasons), which are shown separately in the left panels of Figure 14. Each patch has an observation time range of roughly 240 days with approximately 80 observation epochs. The green curves in the left panels represent the smoothed light curves for each patch with smoothing scale δ = 4.0 days and N_it = 10 (the average cadence is approximately 3 days). The right panels show the fluctuation estimator Σ as a function of ∆t_try for each of the seasons, using the corresponding smoothed light curve. We note that for each season we obtain a pair of secondary minima near the true time delay, i.e. at ∆t_try ≈ ±∆t_true with ∆t_true = 38.33 days, which is shown by the vertical dashed lines. However, for all the seasons we also get some 'false minima' at incorrect time delays.
For some observing seasons, e.g. the 3rd and the 5th, the pair of secondary minima near the true time delay is more prominent (with respect to the false minima) than for the other patches. To take advantage of all the available information, we sum the fluctuation measurements ℰ(∆t_try), calculated using Eq. (8), coming from all the seasons. Furthermore, instead of using a single smoothing scale we use multiple smoothing scales, δ = 3.0, 4.0, 5.0 days, and combine the ℰ(∆t_try) curves for each smoothing scale too. Finally, we calculate the Σ(∆t_try) curve from this combined ℰ(∆t_try) curve using Eq. (9). Figure 15 shows the combined fluctuation estimator Σ as a function of the trial time delay. The dashed lines again mark the true time delay, ∆t_try = ±∆t_true. We find that the pair of secondary minima, occurring at ∆t_try = −39.3 and 39.4 days, are now very prominent, both having depth Σ ≈ −1.9. Importantly, the false minima have been suppressed by combining information from different patches and various smoothing scales. Thus, we can identify the object as a highly probable lensed system. The estimated time delay ∆t_est = 39.35 days is within 3% of the true time delay.
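A minimal sketch of this combination over seasons and smoothing scales, assuming the reconstruct_f1 helper from Section 2 and an iterative smoother smooth_lc such as the one sketched in Appendix A; `patches` is a list of (time, flux, flux error) arrays, one per observing season.

import numpy as np

def combined_sigma(patches, mu_try=0.3, scales=(3.0, 4.0, 5.0), dt_grid=None):
    """Sum the fluctuation curves over seasonal patches and smoothing scales,
    then normalise to the dimensionless Sigma of Eq. (9)."""
    if dt_grid is None:
        dt_grid = np.arange(-80.0, 80.0 + 0.1, 0.1)    # days
    E_total = np.zeros_like(dt_grid)
    for t, F, F_err in patches:
        for delta in scales:
            F_smooth = smooth_lc(t, F, F_err, delta=delta, n_it=10)
            for k, dt_try in enumerate(dt_grid):
                f1_rec = reconstruct_f1(t, F_smooth, mu_try, dt_try)
                E_total[k] += np.mean((f1_rec - f1_rec.mean()) ** 2)   # Eq. (8)
    return dt_grid, (E_total - E_total.mean()) / E_total.std()         # Eq. (9)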
To illustrate the behaviour with unlensed sources, we also analyse the light curve of the first/brightest image by itself (shown in blue in Figure 13). Again, we consider the data patches separately and follow the same procedure as described above for the joint light curve. The combined Σ(∆t_try) curve for this unlensed case is shown in Figure 16. The deepest pair of minima in the Σ(∆t_try) curve occurs at ∆t_try = −102.2 and 102.1 days and has depths Σ = −0.76 and −0.96, which are shallow enough that one can readily identify this as an unlensed case.
We studied five additional randomly chosen systems (with relatively low noise) from TDC1 (Rung 0) using the same algorithm. The left panels in Figure 17 show the combined Σ(∆t_try) curves for these five systems. The right panels show the corresponding results using the brightest image only, used as true negatives (unlensed). For each of the lensed cases the Σ(∆t_try) curve exhibits a pair of prominent minima, near the true time delay (shown by the dashed vertical lines), that stand out from the other minima. Hence, we correctly identify these systems as lensed cases. For systems 105 and 204, the pair of secondary minima have depths exceeding 2σ (Σ < −2.0). None of the unlensed cases shows a prominent pair of secondary minima in the Σ(∆t_try) curves. Therefore, the method correctly identifies all the unlensed light curves.

Figure 13. Simulated light curves of the two images of system 127 from TDC1, Rung 0 (blue and green). We add these two fluxes to construct the equivalent unresolved light curve (red line), and use it for our analysis.
The estimated time delays for all the six lensed systems from the TDC1 (Rung 0) are compared to the corresponding true time delays in the top part of Table 5, with excellent agreement. For systems 105 and 204 the error is much smaller than a day since these light curves have features that stand out particularly well against the noise.

TDC1, Rung 1
Next, we analyse a random subset of light curves from Rung 1, which has a cadence of 3 days on average with 1 day dispersion and 400 observation epochs. The light curves in Rung 1 are sampled over a period of 10 years, so each of the 10 patches in the data is roughly ∼ 120 days long. Rung 1 is therefore not suitable for assessing systems with time delays longer than ∆t_true ≳ 100 days. We use the trial time delay ∆t_try ∈ (−80.0, 80.0) days with a spacing of 0.1 days for Rung 1.
Following the same strategy as for Rung 0, expanded to 10 seasonal patches in each light curve, we compute the fluctuation estimator Σ as a function of trial time delay for some systems from Rung 1 as examples. The left panels in Figure 18 show Σ(∆t try ) considering the joint light curves (lensed cases) for seven systems. For all the systems we find the pair of secondary minima in the Σ(∆t try ) curves near the true time delay (marked by the dashed vertical lines), i.e. ∆t try ≈ ±∆t true . Therefore, one can straightforwardly identify the systems as lensed. However, we see that the secondary minima are somewhat shallower than in Rung 0. This is likely to be due to the shorter overlap between delayed light curves, resulting in weaker signal.
The right panels of Figure 18 show Σ(∆t try ) for true negatives built from the same systems, using only the light curve of the brighter image. Since we do not find a pair of prominent minima, we correctly identify them as unlensed cases.
The estimated time delay for the seven lensed systems from TDC1, Rung 1 are compared with the corresponding truths in the bottom part of Table 5. We find that our estimates match the truth to within 3%, better than our fiducial 5% error.

APPLICATION TO THE COSMOGRAIL LIGHT CURVE OF LENSED QUASAR SDSS J1226-0006
In order to test the performance of our method on real data, we apply it to the publicly available light curves of the doubly imaged lensed quasar SDSS J1226-0006, obtained by the COSMOGRAIL collaboration using the 1.2m Euler Telescope (Millon et al. 2020). The time delay estimated by the COSMOGRAIL team for this system, using the observed image light curves, is ∆t_true = 33.7 ± 2.7 days (Millon et al. 2020). We chose SDSS J1226-0006 since the data have a low noise level compared to the light curve variability, sufficiently long patches after discarding large gaps, a time delay significantly longer than our smoothing scales, and no evidence for strong microlensing. This system is thus a good match to the simulated light curves used in previous sections, providing a good comparison. Analysis of more COSMOGRAIL systems is left for future work.
The observed light curves of the two images are shown in Figure 19 in blue and green. The sum of these light curves, representing an unresolved joint light curve, is shown in red. We consider this joint light curve as our data and test if we can identify the system as lensed and if we can estimate the time delay using our method.
This system has been observed for 14 years, resulting in 14 seasonal patches. However, we cannot use all the patches, since many of them have large time separations (gaps) between two consecutive observations within the patch. Hence we select only patches with (i) a maximum gap of 16 days (separation between any two successive observations) and (ii) a duration longer than 160 days. We find four such patches, which are shown in the left panels of Figure 20. The patches have average cadences of 3.94, 3.79, 3.34 and 4.97 days and include 57, 55, 49 and 42 data points respectively. Although the average cadence in the patches is 3-4 days, there are occasionally significant gaps (∼ 10 days) between consecutive observations. Therefore we need to use smoothing scales that are larger than the average cadence, but not so large as to wipe out all the important features of the light curve. We choose two smoothing scales, δ = 8.0 and 9.0 days. The result obtained using the single smoothing scale δ = 8.0 days is already quite good, but the use of two smoothing scales boosts the signal. The green curves in the left panels of Figure 20 represent the smoothed fluxes, corresponding to the smoothing scale δ = 8.0 days and N_it = 10 iterations, in each data patch. The right panels show the fluctuation estimator Σ as a function of ∆t_try for these smoothed light curves in the four patches. We can see that, except for the third patch from the top, we get a strong pair of minima near the correct time delay, ∆t_try ≈ ±33.7 days, shown by the dashed vertical lines. However, there are a number of other pairs of 'false' minima. We expect the false minima to decrease in depth when we combine the seasons.

Figure 16. Same as Figure 15 but for the light curve of only the brightest image (blue light curve in Figure 13), mimicking a true negative case. No prominent pair of minima is found, identifying the light curve correctly as unlensed.
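The patch selection just described can be sketched as follows; the thresholds max_gap and min_span are the values quoted above for SDSS J1226-0006, while the season_gap threshold used to split the seasons is an illustrative choice of ours.

import numpy as np

def select_patches(t, F, F_err, season_gap=100.0, max_gap=16.0, min_span=160.0):
    """Split a multi-year light curve into seasonal patches and keep only those
    with (i) no internal gap larger than max_gap days and (ii) a duration of at
    least min_span days."""
    t, F, F_err = map(np.asarray, (t, F, F_err))
    breaks = np.where(np.diff(t) > season_gap)[0] + 1  # indices starting a new season
    patches = []
    for chunk in np.split(np.arange(len(t)), breaks):
        ts, Fs, es = t[chunk], F[chunk], F_err[chunk]
        if len(ts) > 1 and ts[-1] - ts[0] >= min_span and np.max(np.diff(ts)) <= max_gap:
            patches.append((ts, Fs, es))
    return patches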
Figure 17. Fluctuation estimator as a function of time delay for some example systems taken from TDC1, Rung 0, based on the combination of the individual fluctuations from the 5 seasonal patches. The panels on the left show the results for the actual lenses, while the panels on the right show the results for true negatives constructed by analysing only the brighter image for each system.

The combined fluctuation estimator Σ(∆t_try) from the four seasonal patches is shown in Figure 21. We find a pair of minima at ∆t_try = −28.7 and 30.5 days with depths Σ = −1.35 and −1.68 respectively. Therefore, we detect the system as a 'highly probable lensed' case according to our relaxed criteria (described in Section 4.1.1). Our estimate of the time delay (∆t_est = 29.60 ± 1.48 days) is consistent with the COSMOGRAIL estimate (∆t_true = 33.7 ± 2.7 days) within the reported 1σ uncertainty. Note that both of the secondary minima in Figure 21 are wide enough to accommodate the time delay estimated by COSMOGRAIL. For completeness, we analyse the observed light curve of the brightest image, which is of course an unlensed case. We consider the same data patches and follow the same procedure as described above for the lensed case. The combined Σ(∆t_try) curve is shown in Figure 22.
The absence of a pair of prominent minima at similar values of |∆t try | correctly identifies this light curve as a true negative.
In summary, we correctly identify the system SDSS J1226-0006 as a lens only using the unresolved light and we estimate the time delay within the COSMOGRAIL uncertainties. Furthermore, our method correctly identifies the light curve of the brightest image only as a true negative.

SUMMARY AND DISCUSSION
We present a novel technique to detect lensed quasar systems and measure the time delays using only unresolved joint light curve data, without any need to assume a model/template or additional information. Our method is general and can be applied to survey data with insufficient angular resolution to resolve the lensed quasars, thus opening up the opportunity to identify lenses and measure time delays in a cost-effective manner. Our method builds on that proposed by Geiger & Schneider (1996), partially breaking the degeneracy in the reconstructed solutions by looking for minima in the residual fluctuations in the reconstructed light curve as a function of trial time delay. A global minimum is always found at ∆t_try = 0, while doubly imaged quasars are identified by a pair of symmetric minima located approximately at ± the true time delay. The location of this pair of minima provides an estimate of the time delay.
We conduct several tests of our technique. First, we use simulations based on a Damped Random Walk for the quasar light curves, with and without noise. We use training sets to define selection criteria and blind sets to test the performance of the method. Second, we use simulated light curves from the Time Delay Challenge. Third, we apply the method to the light curve of the lensed quasar SDSS J1226-0006 observed by the COSMOGRAIL collaboration using the 1.2m Euler Telescope (Millon et al. 2020).
Our main results can be summarized as follows:

1. For light curves with negligible noise, we find 95% recall and 100% precision, based on conservative criteria. The true time delays are recovered within the sampling resolution of the trial time delay (0.1 days).
2. For light curves with ZTF-like noise, we find that smoothing the light curves using an iterative smoothing algorithm prior to applying our method greatly enhances its performance. After smoothing, we find 15% recall and 100% precision, based on conservative criteria. We then introduce a set of relaxed criteria that yields a precision of 92.3% with a higher recall of 60%. The true time delays are recovered within 3%.
3. For realistic LSST multi-year light curves taken from TDC1, we find that combining the fluctuation statistics from multiple years greatly improves the signal in our fluctuation analysis. We consider a number of doubly imaged systems from Rungs 0 and 1 as examples and demonstrate that the method can identify the systems as lensed and recover the time delays within 3% of the respective truths. A follow-up work will analyse all the systems from the different rungs of TDC1 and study the precision and recall for the TDC1 compilation comprehensively.
4. For the COSMOGRAIL light curve of SDSS J1226-0006 we find that our method correctly identifies it as a lens from the joint unresolved light curve and estimates the time delay within the COSMOGRAIL uncertainty. We use the light curve of just one of the images to simulate a false negative, and show that it is correctly labeled by our method.
The main strength of our method in comparison to those proposed in the literature (Geiger & Schneider 1996; Shu et al. 2021; Springer & Ofek 2021a,b; Biggio et al. 2021) is that it does not require any additional information: neither a model/template for the quasar light curves, nor any spectroscopic information. (In a separate article we aim to provide a mathematical proof of how our data-driven method detects lenses and measures the time delays by minimizing the fluctuation in the reconstructed image light curves; this will also allow us to compare our approach with the other proposed methods.) Our method is thus well suited for detecting lensed quasar systems using only the light curves from ongoing time domain wide field surveys such as Pan-STARRS1 and ZTF, and from future surveys such as LSST at the Vera C. Rubin Observatory, as well as from existing databases of quasar light curves. The generality of the method suggests that it should be more complete and unbiased than alternatives based on stronger assumptions. For example, our method can identify the lenses and measure the corresponding time delays independently of the power spectrum of the intrinsic quasar light curves, as demonstrated in Appendix C.
In future work, we plan to test this method on a large number of simulations in a variety of scenarios, with the goal of obtaining a proper understanding of the uncertainties, and determining the precision and recall as a function of conditions. From this exercise one can also understand which observation strategy is favoured by this approach (e.g. better cadence vs. longer observation time vs. better noise control), or tailor the results to existing and planned surveys. Furthermore, we plan to extend the algorithm to quad systems. Finally, we plan to apply our algorithm to existing datasets and carry out a search for lensed quasars.
A. ITERATIVE SMOOTHING ALGORITHM

Given the observed fluxes F_obs(t_i) with uncertainties σ_i, the smoothed light curve at iteration n is obtained as

F_n(t) = F_{n−1}(t) + (1/N(t)) Σ_i [ (F_obs(t_i) − F_{n−1}(t_i)) / σ_i² ] exp( −(t_i − t)² / (2δ²) ),   (A1)

where the normalisation term N(t) is given by

N(t) = Σ_i exp( −(t_i − t)² / (2δ²) ) / σ_i².   (A2)

Here F_{n−1}(t) is the smoothed flux obtained in the previous step, i.e. at the (n − 1)-th iteration. We start with an initial guess, which can be a constant for simplicity, F_0(t) = constant, and continue iterating N_it times. After a sufficient number of iterations, the smoothed flux becomes independent of the initial guess. The smoothing method has two parameters: the smoothing scale δ and the number of iterations N_it.
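A minimal sketch of Eqs. (A1)-(A2), using the Gaussian kernel and inverse-variance weighting written above; the function name smooth_lc matches the helper assumed in the earlier sketches, and the smoothed flux is returned at the observed epochs.

import numpy as np

def smooth_lc(t, F_obs, F_err, delta=4.0, n_it=10):
    """Iterative Gaussian-kernel smoothing of a light curve, Eqs. (A1)-(A2)."""
    t = np.asarray(t, dtype=float)
    F_obs = np.asarray(F_obs, dtype=float)
    w = 1.0 / np.asarray(F_err, dtype=float) ** 2              # inverse-variance weights
    kernel = np.exp(-(t[None, :] - t[:, None]) ** 2 / (2.0 * delta ** 2))
    norm = kernel @ w                                          # N(t), Eq. (A2)
    F_smooth = np.full_like(t, np.mean(F_obs))                 # constant initial guess F_0
    for _ in range(n_it):
        F_smooth = F_smooth + kernel @ ((F_obs - F_smooth) * w) / norm   # update, Eq. (A1)
    return F_smooth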

B. LIGHT CURVES WITH DEGRADED SAMPLING CADENCE
The simulated data that we use for validation in Sections 3 and 4 are sampled with daily cadence. We tested the effect of sampling the same light curves with 3-day cadence.
Figure 23. The figure demonstrates that our method performs well even for intrinsic light curves generated from white noise (left panels) and from a blue power spectrum (right panels). For simplicity, we assume negligible observational noise and one day cadence. The vertical dashed lines represent ∆t_try = ±∆t_true.
In general, Σ(∆t_try) becomes smoother with decreasing cadence, since small timescale features are erased. For the 3-day cadence light curves with negligible noise we identify 6 out of 10 systems as lensed, compared with 9 for the daily cadence sampling.
For the light curves with ZTF-like noise, we find that the target pair of secondary minima at ∆t try ≈ ±∆t true is slightly shallower in the 3-day cadence case. However, we can still identify a few with the relaxed selection criteria. Fortunately, in reality, quasars are typically observed over many years leading to multiple patches in the data. We can independently use those patches since typically the patches are longer than the maximum of ∆t try . As in the case of TDC1 data shown in the main text, it is likely that combining multiple years of observations will improve precision and recall of the estimator. A systematic investigation of the dependency of precision and recall on sampling is left for future work.

C. EXAMPLES OF FLAT AND BLUE POWER SPECTRA
The power spectrum of a time series is defined as

P(ω) = |f̃(ω)|²,

where f̃(ω) is the Fourier transform of the time series f(t). In Sections 3 and 4 we show a number of examples where the intrinsic quasar light curves are simulated using a damped random walk and hence can be roughly described by red power spectra (P(ω) ∝ |ω|^−γ with γ > 0). In Figure 23 we illustrate that the method performs equally well for two other types of power spectrum: flat (white noise) and blue (P(ω) ∝ |ω|^γ, γ > 0), shown in the left and right panels respectively. We correctly recover the time delays by following the pair of prominent secondary minima in the Σ(∆t_try) curves.
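For reference, such test light curves can be generated by colouring Gaussian white noise in Fourier space; the sketch below (with arbitrary normalisation, not the simulation code used for Figure 23) returns white noise for gamma = 0, a blue spectrum for gamma > 0 and a red spectrum for gamma < 0.

import numpy as np

def power_law_light_curve(n_days=1500, gamma=0.0, seed=0):
    """Generate a light curve with power spectrum P(omega) ~ |omega|**gamma."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_days)
    freqs = np.fft.rfftfreq(n_days, d=1.0)                 # daily sampling
    amp = np.zeros_like(freqs)
    amp[1:] = np.abs(freqs[1:]) ** (gamma / 2.0)           # amplitude ~ sqrt(P); DC term removed
    lc = np.fft.irfft(np.fft.rfft(white) * amp, n=n_days)
    return lc / lc.std()                                   # unit-variance light curve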