Image simulations for strong and weak gravitational lensing

Gravitational lensing has been identified as a powerful tool to address fundamental problems in astrophysics at different scales, ranging from exoplanet identification to the characterization of dark energy and dark matter in cosmology. Image simulations have played a fundamental role in the realization of the full potential of gravitational lensing by providing a means to address needs such as systematic error characterization, pipeline testing, calibration analyses, code validation, and model development. We present a general overview of the generation and applications of image simulations in strong and weak gravitational lensing.


Introduction
Gravitational lensing is defined as the bending of light due to the curvature of spacetime caused by any mass-energy distribution, and it is explained by the theory of General Relativity (GR) (e.g., Einstein (1917); Misner et al. (1973); Carroll (2004)). Lensing phenomena are diverse and depend on the mass-energy distribution of the lens or deflector (e.g., galaxies, clusters of galaxies, the dark matter large-scale structure, etc.), as well as on the relative geometric configuration between the source (e.g., distant galaxies, quasars, or even the cosmic microwave background), the lens, and the observer. It is therefore useful and customary to identify several regimes of lensing depending on the particular configuration of the system at hand (Dodelson 2017; Schneider et al. 1992). The case in which multiple images and arcs are produced is known as strong lensing (Oguri 2019; Treu 2010), whereas the less dramatic, but much more common, case in which the lensing signal is so subtle that it can only be detected statistically is known as weak lensing (Bartelmann and Maturi 2017; Schneider 2005; Bartelmann and Schneider 2001). A third regime, known as microlensing, uses the magnification of the image of a source star when it is aligned with another, lensing star along the line of sight. If the lensing star hosts a planet, the planet's lensing effects can also be measured. This technique is used to discover planets at large distances from the Earth (Tsapras 2018; Mao 2012; Gaudi 2012). In this paper we focus on image simulations for strong and weak gravitational lensing.
The astrophysical applications of gravitational lensing are numerous. In cosmology, it is a central technique to test the current concordance model (ΛCDM, a cosmological constant as a form of dark energy plus cold dark matter), which is supported by several independent lines of evidence (ranging in scale from the cosmic microwave background to galaxies). This successful model, however, calls for the existence of unknown components, dark matter and dark energy, that together constitute about 95% of the contents of the Universe (Weinberg et al. 2013; Huterer and Shafer 2017; Abbott et al. 2018; Planck Collaboration et al. 2018). Strong lensing, in particular, is used to study the distribution of dark matter in galaxies and clusters of galaxies, as well as substructures in dark matter halos, and to probe the distribution of total mass (baryonic and non-baryonic) at small scales (Dalal and Kochanek 2002; Vegetti et al. 2012; Nierenberg et al. 2017; Hezaveh et al. 2016; Gilman et al. 2020; Hsueh et al. 2020). The number of strong lensing systems and of giant arcs in them can also be used as a cosmological probe, and measurements of the time delays of the multiple images of supernovae and quasars can be used to probe the geometry of the universe and its kinematics through cosmography measurements. In general, strong lensing allows for the determination of cosmological parameters such as the Hubble-Lemaître parameter (H_0) (and for the study of the current H_0 tension, e.g., Bernal et al. 2016; Pandey et al. 2019), the dark energy equation of state (w), and the total mass density (Ω_m), and for testing alternatives to the standard cosmological model and GR (Jullo et al. 2010; Magaña et al. 2015; Caminha et al. 2016; Acebron et al. 2018; Grillo et al. 2018).
Massive clusters also act as cosmic telescopes and, through strong lensing magnification, allow for the identification and study of distant galaxies from the early universe, black holes, and quasars and active galactic nuclei that otherwise would not be detected at native instrument resolutions (Johnson et al. 2017a,b; Livermore et al. 2017). As a probe of both the growth of structure and the expansion history of the Universe, weak lensing of the large-scale structure, or cosmic shear, has been used to investigate the cause of the observed accelerated expansion of the universe (a form of dark energy or models beyond GR) (Aylor et al. 2019; Poulin et al. 2019; Di Valentino et al. 2017; Abbott et al. 2018; Kilbinger 2015; Munshi et al. 2008; Hoekstra and Jain 2008; Gunn 1967). While lensing is important for many different applications and its theoretical foundations are well understood, the measurement of the signal in each regime is subject to several challenges. Multiple imaging due to strong lensing is a relatively rare phenomenon, as it only affects a small fraction of distant sources (Press and Gunn 1973). The measurements usually also require subarcsecond resolution and, depending on the application, knowledge of the redshifts of the source and the deflector. When redshift information is necessary and cannot be obtained via spectroscopy, photometric redshifts are often used as estimates. However, these are difficult to calculate in crowded fields due to the mixing of light from the foreground object with that of the background source (Treu 2010; Jouvel et al. 2014; Molino et al. 2017). Moreover, in some cases they might not prove accurate enough for the application at hand.
Weak gravitational lensing, on the other hand, is fundamentally limited by shape noise, the standard deviation of the intrinsic galaxy shape distribution, requiring imaging of a large number of galaxies to statistically reduce it. In order to address this limitation, several current (e.g., the Kilo Degree Survey (KiDS), the Hyper Suprime-Cam survey (HSC), and the Dark Energy Survey (DES)) and future (e.g., the Dark Energy Spectroscopic Instrument (DESI, DESI Collaboration et al. 2016), the Prime Focus Spectrograph (PFS, Takada et al. 2014), the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST, Ivezić et al. 2019), Euclid (Laureijs et al. 2011), and the Wide Field Infrared Survey Telescope (WFIRST, Spergel et al. 2015)) projects have been and are being designed to collect large amounts of data over large fields of view, reaching a point at which systematic errors are comparable to, or even dominant over, statistical errors. Large data sets will also present an opportunity to increase the number of strong lensing systems found (including more complex systems with multiple sources and deflectors).
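The scaling that drives this requirement can be illustrated with a short numpy sketch. The per-component ellipticity dispersion (σ_e = 0.26) and the constant input shear are illustrative assumptions; the statistical error on the mean shear falls as σ_e/√N:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_e = 0.26          # assumed intrinsic ellipticity dispersion (per component)
g_true = 0.02           # assumed constant shear applied to all galaxies

for n_gal in (100, 10_000, 1_000_000):
    # observed ellipticity = intrinsic shape noise + shear (weak-lensing limit)
    e_obs = rng.normal(0.0, sigma_e, n_gal) + g_true
    g_hat = e_obs.mean()
    err = sigma_e / np.sqrt(n_gal)   # expected statistical error on the mean
    print(f"N={n_gal:>9}: g_hat={g_hat:+.4f}, expected error={err:.4f}")
```

With a hundred galaxies the shear is buried in shape noise; with a million it is recovered at the percent level, which is why survey area and depth directly set the statistical power of cosmic shear.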
Image simulations play a central role in addressing the observational and systematic challenges faced during the measurement of strong and weak gravitational lensing. In general, simulations with known inputs are a fundamental tool for important tasks such as software testing and validation, systematic error characterization, requirement validation, model testing and developing, calibration, analysis testing, etc. Depending on the objective at hand and the computational resources available, there is usually a trade-off between realism and efficiency that results in simulations with varying degrees of complexity (e.g., end-to-end simulations vs simulations where single parameters are changed one at a time).
The structure of this paper is as follows. Section §2 briefly reviews the basic theory of gravitational lensing (for more detailed reviews, see, e.g., Schneider 2005; Hoekstra and Jain 2008; Bartelmann 2010; Kilbinger 2015). Section §3 provides an overview of image simulations for strong and weak gravitational lensing studies. We explain how multiple images and giant arcs produced by strong lensing in galaxy clusters are simulated via ray-tracing, and how strong lensing pipelines can be used to simulate observations through any particular instrument. We discuss the use of simulations in assessing lensing inversion codes for mass modeling, and we also point out the increasing use of simulated strong lensing systems as training sets for finding codes based on machine learning techniques. We also discuss the role of simulations in redshift distribution estimation, the characterization of weak lensing systematic errors, and the creation of mock images that include gravitational lensing effects in the context of observations from large astronomical surveys. We conclude in Section §4.

Gravitational lensing basics
Photons from background sources travel along null geodesics in spacetime. When passing a nearby mass density concentration, their trajectories are bent by an amount determined by the deflection angle, α̂. The deflection angle can be derived by considering the trajectory of a photon in a weakly perturbed Friedmann-Lemaître-Robertson-Walker metric (assuming that Φ/c² ≪ 1 and that the potential is static, where Φ can be thought of as the scalar Newtonian potential that obeys Poisson's equation) and solving the perturbed geodesic equation. Alternatively, it can be derived by using Fermat's Principle and assuming that light travels through a medium with effective refraction index n = 1 − 2Φ/c². The deflection angle is thus given by the integral of the transverse gradient of the potential along the light path:

α̂ = (2/c²) ∫ ∇⊥Φ dl.

For a point mass, with a gravitational potential given by Φ(r) = −GM/r, the deflection angle is 4GM/(c²b), where b is known as the impact parameter, the distance of closest approach to the lens or deflector. For most situations of astrophysical interest, the spatial scales of the lens itself are much smaller than the angular diameter distances between the source and the deflector (D_ds) and between the deflector and the observer (D_d). Under these circumstances, it is possible to use the thin-lens approximation, in which the 3D mass distribution of the deflector is projected onto a 2D plane, the lens plane, perpendicular to the line of sight and characterized by its surface-mass density (with ξ as the position vector in the lens plane):

Σ(ξ) = ∫ ρ(ξ, z) dz.

The deflection angle at any point ξ is given by the sum of the contributions from each individual mass element in the plane:

α̂(ξ) = (4G/c²) ∫ Σ(ξ′) (ξ − ξ′)/|ξ − ξ′|² d²ξ′.

The line of sight defines the optical axis between the observer, the lens plane, and the source plane.
For a source at an angular position β that emits a ray of light with impact parameter ξ = θ D_d on the lens plane, the general mapping between β and θ is given by the lens equation:

β = θ − (D_ds/D_s) α̂(D_d θ) ≡ θ − α(θ).

The lens equation is, in general, non-linear and can have multiple solutions (this is formally the strong lensing regime). The second equality defines the reduced deflection angle α = (D_ds/D_s) α̂. The lens equation thus implies that (θ − β) can be expressed as the gradient of a potential ψ(θ), known as the lensing potential, which is a scaled projection of the 3D Newtonian potential Φ:

ψ(θ) = (2/c²) (D_ds/(D_d D_s)) ∫ Φ(D_d θ, z) dz.

Defining ∇_θ = D_d ∇⊥ as the angular gradient, the lens equation can be written as

β = θ − ∇_θ ψ(θ).

This form of the lens equation expresses Fermat's Principle, ∇_θ t(θ, β) = 0, with the time delay surface t(θ) defined as:

t(θ, β) = [(1 + z_d)/c] [D_d D_s/D_ds] [(θ − β)²/2 − ψ(θ)].

The time delay surface consists of a geometric term and a gravitational time delay term known as the Shapiro delay. Multiple images appear at the stationary points of the surface, and their arrival-time differences depend on the Hubble-Lemaître constant through the angular diameter distances. For a point mass, the lens equation reduces to β = θ − 4GM D_ds/(c² D_d D_s θ). When the source and the lens are aligned (β = 0), the image formed is a ring with the Einstein radius

θ_E = [(4GM/c²) (D_ds/(D_d D_s))]^(1/2).

The Laplacian of the lensing potential is proportional to the surface-mass density of the lens:

∇²_θ ψ = 2 Σ/Σ_c ≡ 2κ.

The term κ is known as the convergence, and the critical surface-mass density is defined as Σ_c ≡ c² D_s/(4πG D_ds D_d). When the images of the lensed sources are small compared to the scales over which the deflection angle varies considerably, the lens equation can be linearized to obtain local information about the mapping. Its Jacobian is given by

A ≡ ∂β/∂θ = (δ_ij − ∂²ψ/(∂θ_i ∂θ_j))

and can be written as

A = (1 − κ) [[1 − g_1, −g_2], [−g_2, 1 + g_1]],
where the complex shear γ = γ_1 + iγ_2 is defined in terms of the derivatives of the lensing potential as γ_1 ≡ (1/2)(∂_1∂_1ψ − ∂_2∂_2ψ) and γ_2 ≡ ∂_1∂_2ψ, and the reduced shear is defined as g ≡ γ/(1 − κ). The inverse of the Jacobian is known as the magnification tensor, and its determinant gives the local magnification μ in the limit of a point source:

μ = 1/det A = 1/[(1 − κ)² − |γ|²].

The curves in the image plane for which the magnification is formally infinite (det A = 0) are known as critical curves. The corresponding curves in the source plane are known as caustics. In practice, the magnification is never infinite: the finite size of extended sources and other optical effects become important (Schneider et al. 1992). However, a source that lies close to a caustic will be highly magnified and distorted, producing images such as the giant arcs seen in clusters.
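As a worked example of the point-mass formulas above, the following sketch evaluates the Einstein radius, the two image positions, and their magnifications. The lens configuration (mass and distances) is invented for illustration and does not correspond to any real system:

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
Mpc = 3.086e22         # m
M_sun = 1.989e30       # kg

# Hypothetical galaxy-scale lens configuration (illustrative values only)
M    = 1e12 * M_sun    # point-mass lens
D_d  = 1000 * Mpc      # observer-lens angular diameter distance
D_s  = 2000 * Mpc      # observer-source distance
D_ds = 1200 * Mpc      # lens-source distance

# Einstein radius: theta_E = sqrt(4 G M D_ds / (c^2 D_d D_s))
theta_E = np.sqrt(4 * G * M * D_ds / (c**2 * D_d * D_s))    # radians
print(f"theta_E = {np.degrees(theta_E) * 3600:.2f} arcsec")

# The point-mass lens equation beta = theta - theta_E^2 / theta has two
# solutions for a source at beta != 0:
beta = 0.5 * theta_E
theta_pm = 0.5 * (beta + np.array([1, -1]) * np.sqrt(beta**2 + 4 * theta_E**2))

# Image magnifications from the Jacobian of the point-mass mapping,
# with u = beta / theta_E:
u = beta / theta_E
mu = (u**2 + 2) / (2 * u * np.sqrt(u**2 + 4)) + np.array([1, -1]) * 0.5
print("image positions / theta_E:", theta_pm / theta_E)
print("magnifications:", mu)
```

A useful check is that the two image positions always satisfy θ₊θ₋ = −θ_E², and the brighter image sits outside the Einstein ring while the fainter one sits inside it.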

Lensing by galaxy clusters
The lens equation is, in general, non-linear, and it will have multiple solutions if the mass of the deflector is large enough and there is a geometric alignment between the deflector and the background sources. Multiple images and giant arcs can be produced by systems such as galaxy clusters, the largest gravitationally bound objects in the Universe. In order to generate simulated images that include the effects of lensing by systems like these, a mass distribution for the deflector must be produced. For this purpose, analytical models such as a singular isothermal sphere or an elliptical power law (Tessore and Metcalf 2015) can be used, with the advantage of being computationally fast but with the limitation of being too idealized. This limitation can be partially overcome by the use of semi-analytical algorithms such as MOKA, in which each component of the system (e.g., host dark matter halo, central galaxy, and satellites) is modelled analytically but information from state-of-the-art numerical simulations is also used.
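The basic recipe for producing a lensed image from such an analytic model can be sketched with inverse ray-shooting through a singular isothermal sphere, whose reduced deflection is α(θ) = θ_E θ/|θ|: each image-plane pixel is mapped to the source plane with the lens equation, where a toy Gaussian source is sampled (all scales here are illustrative, in units of the Einstein radius). Because lensing conserves surface brightness, this mapping directly yields the distorted images:

```python
import numpy as np

# Image-plane grid, in units of the SIS Einstein radius (theta_E = 1)
n, half_width = 200, 2.5
t = np.linspace(-half_width, half_width, n)
theta1, theta2 = np.meshgrid(t, t)
theta = np.sqrt(theta1**2 + theta2**2) + 1e-12   # avoid division by zero

# Singular isothermal sphere: alpha(theta) = theta / |theta|
beta1 = theta1 - theta1 / theta
beta2 = theta2 - theta2 / theta

# Circular Gaussian source of width sigma_s, offset from the lens center
sigma_s, b1, b2 = 0.1, 0.3, 0.0
image = np.exp(-((beta1 - b1)**2 + (beta2 - b2)**2) / (2 * sigma_s**2))

# Surface brightness is conserved, so 'image' now contains the two
# tangentially stretched images of the source on opposite sides of the lens.
dA = (t[1] - t[0])**2
print("peak brightness:", image.max())
print("lensed flux:", image.sum() * dA, "unlensed flux:", 2 * np.pi * sigma_s**2)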
The use of full N-body and hydrodynamical simulations is another approach to modeling cluster lenses, with the lensing effects later incorporated through ray-tracing algorithms. In N-body simulations, a box is filled with N massive particles that interact only through gravity (e.g., the Millennium simulation (Springel et al. 2005)). Numerical simulations provide models for the distribution of dark matter and gas in clusters (Gardini et al. 2004). The lensing properties of dark matter halos generated through numerical simulations can be studied with the ray-tracing technique, in which the trajectory of each individual photon is propagated and followed through the system, assuming that the photons emitted from the source are independent (Killedar et al. 2011; Meneghetti et al. 2008, 2010; Li et al. 2016). A large number of light rays is sent through the mass distribution of the lens, the deflection of their trajectories is computed, and the distorted and magnified images of the background sources are reconstructed. Since each ray is independent, this approach lends itself to computational parallelization. Lens models built from numerical methods provide particle positions and masses, which can be used to calculate the deflection angle of any light ray that intercepts the lens plane at a given normalized, dimensionless position x ≡ ξ/ξ_0 (for some characteristic distance ξ_0 in the lens plane) by summing the contributions from all N lens particles:

α̂(x) = (4G/(c² ξ_0)) Σ_{i=1..N} m_i (x − x_i)/|x − x_i|².

The computational time of this direct approach scales as N². A more efficient approach projects the lens particle positions onto a regular grid of size M × M, where the mass in each cell is calculated by summing the masses of the particles that belong to that cell.
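A minimal sketch of the direct summation, with a toy particle distribution and arbitrary units standing in for a real N-body output (the softening parameter eps is an added assumption that regularizes rays passing very close to a particle):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "N-body" lens: N point masses in the (dimensionless) lens plane
N = 500
x_p = rng.normal(0.0, 0.5, size=(N, 2))     # particle positions
m_p = np.full(N, 1.0 / N)                   # equal masses (arbitrary units)

def deflection_direct(x, x_p, m_p, eps=1e-3):
    """Direct sum over particles: alpha(x) ∝ sum_i m_i (x - x_i)/|x - x_i|^2.
    eps softens the divergence when a ray passes exactly through a particle."""
    d = x[None, :] - x_p                    # (N, 2) separations
    r2 = (d**2).sum(axis=1) + eps**2
    return (m_p[:, None] * d / r2[:, None]).sum(axis=0)

# Deflection for a small bundle of rays: O(N_rays * N) overall
rays = np.array([[0.5, 0.0], [1.0, 0.0], [2.0, 0.0]])
alphas = np.array([deflection_direct(r, x_p, m_p) for r in rays])
print(alphas)
```

Far outside the particle distribution the deflection approaches the monopole value m_tot·x/|x|², which is a convenient sanity check; the O(N²) cost of looping over every ray and every particle is what the gridded and Fourier methods below avoid.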
A bundle of rays is traced through another regular grid covering the lens plane, and the deflection angle for each ray (i, j) is computed by adding up the contributions from each cell (k, l) in the grid:

α̂_ij = (4G/c²) Σ_{k,l} M_kl (ξ_ij − ξ_kl)/|ξ_ij − ξ_kl|².

Comparing this sum with the definition of the convergence, the deflection angle can be written as the convolution of the convergence with the kernel

K(ξ) = (1/π) ξ/|ξ|².

The deflection angle can then be obtained by applying the convolution theorem in Fourier space (α̃(k) = 2π κ̃(k) K̃(k), where the tilde indicates the Fourier transform). Working in Fourier space offers speed advantages (provided that specific boundary and periodicity conditions are met), although the deflection angle can also be obtained by direct calculation using tree methods (Meneghetti et al. 2010; Rasia et al. 2012). Once the deflection angles are calculated, lensing properties such as the shear and magnification can be obtained through the Jacobian of the lens equation. When multiple deflectors are considered, the lens equation can be generalized to include N_p lens planes:

β = θ − Σ_{i=1..N_p} (D_is/D_s) α̂_i(θ_i),

where θ_i is the position of the ray on the i-th plane and D_is is the angular diameter distance between the i-th plane and the source. Analogously, multiple source planes can be included to provide a more accurate representation of the system at hand. For example, in the case of an isolated lens such as a galaxy cluster, source planes are generated to account for the redshift dependence of the geometric factor D_ds/D_s: source objects are divided into a number of redshift bins with centers equally spaced in lensing distance, each one defining a source plane.
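The Fourier-space approach can be sketched as follows: solve ∇²ψ = 2κ in Fourier space and take α = ∇_θψ, so that α̃(k) = −2i k κ̃(k)/|k|², under the periodic-boundary assumption noted above (the Gaussian convergence map here is an illustrative toy model):

```python
import numpy as np

def deflection_from_kappa(kappa, pixel_scale):
    """Deflection field from a convergence map via FFT.
    Solves laplacian(psi) = 2*kappa, alpha = grad(psi), i.e. in Fourier space
    alpha_hat(k) = -2i k kappa_hat(k) / |k|^2. Assumes periodic boundaries."""
    n = kappa.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=pixel_scale)
    kx, ky = np.meshgrid(k, k, indexing="xy")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                           # avoid 0/0; zero mode handled below
    kap_hat = np.fft.fft2(kappa)
    ax_hat = -2j * kx * kap_hat / k2
    ay_hat = -2j * ky * kap_hat / k2
    ax_hat[0, 0] = ay_hat[0, 0] = 0.0        # mean deflection is unconstrained
    return np.fft.ifft2(ax_hat).real, np.fft.ifft2(ay_hat).real

# Toy Gaussian convergence blob on a 256x256 grid
n, scale = 256, 0.05
x = (np.arange(n) - n // 2) * scale
X, Y = np.meshgrid(x, x)
kappa = np.exp(-(X**2 + Y**2) / (2 * 0.3**2))
ax, ay = deflection_from_kappa(kappa, scale)
```

For a circularly symmetric κ the result can be checked against the enclosed-mass formula α(r) = (2/r)∫₀^r κ(r′) r′ dr′, which the FFT solution reproduces away from the box edges.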

Observation pipelines
Using the ray-tracing formalism to include lensing effects, a model of a galaxy cluster (or of multiple clusters), a given distribution of sources, and information on the observational conditions (e.g., bandpass, detector parameters, integration time, etc.), it is possible to construct an observation pipeline (e.g., Skylens (Meneghetti et al. 2008, 2010), PICS (Li et al. 2016)) that produces simulated images of systems observed through any particular instrument. The pipeline Skylens, for example, generates source galaxies as denoised postage stamps drawn from the galaxy sample of the Hubble eXtreme Deep Field (HXDF, Illingworth et al. 2013), and implements the capability to use multiple source and lens planes. Lens models are produced with the semi-analytic code MOKA, although any analytical dark matter halo model or numerical simulation can be used as well. Input parameters such as the exposure time (t_exp), the total throughput function (T(λ), which includes the quantum efficiency of the detector, the mirror reflectivity, the transmission curves of the filters and of the lenses in the optical system, and the total extinction function), sky coordinates, effective telescope diameter (D), detector gain (g), readout and Poisson noise, and pixel scale (p) must be specified by the user in order to prepare a virtual observation through a chosen instrument and telescope and to calculate the measured flux of a galaxy with a given spectral energy distribution. Following the discussion and formulas in Grazian et al. (2004), the total photon counts on the detector (in analog-to-digital units, ADU, or digital numbers, DN) from a source with surface brightness I(x, λ) (erg s⁻¹ cm⁻² Hz⁻¹ arcsec⁻²) are given by the sum of two terms, the contributions from the source and from the sky, respectively (with h as Planck's constant and S(λ) the sky flux per square arcsecond), plus the dark current n_dark.
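As an illustration of such a virtual observation, the sketch below evaluates only the source term of the photon-count budget for a hypothetical instrument (top-hat throughput, flat-spectrum AB = 24 point source; all instrument values are invented, and the sky and dark-current terms are omitted for brevity). The spectral flux f_ν is converted to a photon rate by dividing by the photon energy hν before integrating over the bandpass:

```python
import numpy as np

h = 6.626e-27                     # Planck constant [erg s]

# Hypothetical instrument setup (illustrative values only)
t_exp = 600.0                     # exposure time [s]
D = 240.0                         # effective aperture diameter [cm]
gain = 2.0                        # detector gain [e-/ADU]
area = np.pi * (D / 2.0)**2       # collecting area [cm^2]

# Toy top-hat throughput over 5000-6000 Angstrom
lam = np.linspace(4000.0, 7000.0, 3001)          # [Angstrom]
T = np.where((lam > 5000.0) & (lam < 6000.0), 0.5, 0.0)

# Flat-spectrum point source: f_nu in erg s^-1 cm^-2 Hz^-1 at AB mag 24
f_nu = 10 ** (-0.4 * (24.0 + 48.6)) * np.ones_like(lam)

# Photon rate = integral of f_nu * T / (h * nu) d(nu); changing variables to
# wavelength (nu = c/lam, d(nu) = c/lam^2 d(lam)) gives f_nu * T / (h * lam)
integrand = f_nu * T / (h * lam)
counts_e = t_exp * area * np.sum(integrand) * (lam[1] - lam[0])
print(f"source counts: {counts_e:.0f} e-  = {counts_e / gain:.0f} ADU")
```

A full pipeline would add the sky integral S(λ) over the pixel solid angle, the dark current, and the noise realizations on top of this mean signal.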
The zero point of the image, in the AB system, can then be calculated from the same instrument parameters (Grazian et al. 2004).

Ray tracing
Once the telescope, the detector, and the deflector (or deflectors) have been defined, Skylens accounts for the tidal effect of matter along the line of sight by using the generalized lens equation with multiple lens planes and reconstructs the images of the sources. The images are then convolved with the instrumental PSF, and different sources of noise and the sky background are added according to the parameters of the simulated observation. Fig. 1 shows an example of an image simulation of the Hubble Space Telescope (HST) Advanced Camera for Surveys Wide Field Channel generated by Skylens through the combination of three different filters.
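These final steps (PSF convolution plus noise) can be sketched in numpy. The Gaussian PSF, the FFT convolution with periodic boundaries, and the noise levels below are simplifying assumptions for illustration, not Skylens's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_psf(n, fwhm_pix):
    """Normalized circular Gaussian PSF on an n x n grid."""
    sigma = fwhm_pix / 2.355
    x = np.arange(n) - n // 2
    X, Y = np.meshgrid(x, x)
    psf = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return psf / psf.sum()

def observe(ideal, fwhm_pix=3.0, sky_level=100.0, read_noise=5.0):
    """Convolve a noiseless lensed image with the PSF (via FFT, assuming
    periodic boundaries), then add Poisson photon noise and Gaussian read noise."""
    psf = gaussian_psf(ideal.shape[0], fwhm_pix)
    blurred = np.fft.ifft2(np.fft.fft2(ideal) * np.fft.fft2(np.fft.ifftshift(psf))).real
    noisy = rng.poisson(np.clip(blurred + sky_level, 0, None)).astype(float)
    noisy += rng.normal(0.0, read_noise, noisy.shape)
    return noisy - sky_level        # sky-subtracted "observation"

# Toy lensed arc: thin half-annulus of flux
n = 128
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)
arc = 200.0 * np.exp(-((r - 30.0)**2) / (2 * 2.0**2)) * (X > 0)
obs = observe(arc)
print("input flux:", arc.sum(), "recovered flux:", obs.sum())
```

Because convolution and sky subtraction conserve flux on average, the total counts of the noisy observation scatter around the input flux, while the arc itself is visibly broadened by the PSF.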

Image simulations in weak and strong lensing mass modeling
One of the most important applications of strong and weak lensing by galaxy clusters is constraining the total mass distribution of the lens, which is dominated by dark matter (Jullo et al. 2007; Caminha et al. 2019). Strong lensing observables such as flux ratios, relative image positions, and time delays are used in combination with weak lensing, galactic kinematics, and X-ray information to constrain the matter distributions of clusters via a lens inversion process. Knowledge of the matter profile of galaxy clusters is crucial for understanding the interplay between dark matter and baryons and the process of large-scale structure formation, and for testing the predictions of the standard ΛCDM cosmological model. High-resolution images of galaxy clusters taken with the Hubble Space Telescope (e.g., CLASH (Postman et al. 2012), the HST Frontier Fields) have enabled the community to develop and test different inversion algorithms that, however, do not always produce consistent reconstructions, even when applied to the same systems (see, for example, the analyses of the system MACS J1149.5+2223 by Smith et al. (2009) and Zitrin and Broadhurst (2009)). The process of mass modeling is subject to many challenges, and in this context image simulations provide a valuable tool to assess the performance of inversion codes, to find ways to improve them, and to identify the properties of lenses that are most affected by errors during the construction of a lens model (e.g., cluster ellipticities and dynamics, substructures, baryonic physics, etc.). For example, Meneghetti et al. (2017) used MOKA and Skylens to simulate strong lensing observations through galaxy clusters with the characteristics of the HST Frontier Fields. The simulations were used to test several lensing inversion methods (parametric, non-parametric or free-form, and hybrid) by different groups.
Simulations with Skylens have also been used to test the robustness of cluster mass estimates based on weak lensing information, and to study hydrostatic biases through comparisons with simulated X-ray observations (Meneghetti et al. 2010; Rasia et al. 2012).

Strong lensing simulations and machine learning methods
Image simulations of strong lensing systems have been used to predict that current and future wide-field galaxy surveys (e.g., DES, the HSC survey, KiDS, LSST, Euclid, and WFIRST) will produce from several to hundreds of thousands of galaxy-galaxy strong lensing systems (Collett 2015). Many efforts have recently focused on employing techniques from computer vision and machine learning to go beyond traditional approaches, such as visual searches for "blue" arcs near "red" galaxies (Diehl et al. 2017), goodness-of-fit examinations after fitting a model to all candidates (Marshall et al. 2009; Chan et al. 2015), and public science challenges, to discover new strong lensing systems in the large datasets. Neural networks have been shown to be able to distinguish between simulated lenses and non-lenses (Lanusse et al. 2017; Hezaveh et al. 2017). Jacobs et al. (2017, 2019a,b) have used convolutional neural networks (CNNs, LeCun et al. 1989) to produce a catalog of galaxy-galaxy strong lenses (including high-redshift systems) using data from the Dark Energy Survey, and Petrillo et al. (2017, 2019) have correspondingly found hundreds of candidates in KiDS data. Jacobs et al. (2019b) use the LENSPOP (Collett 2015) code to generate a training set consisting of hundreds of thousands of labeled simulated examples to train a CNN that classifies lenses and non-lenses. Metcalf et al. (2019) use N-body (Millennium) and ray-tracing (GLAMER) simulations to analyze a variety of methods, including CNNs, visual inspection, and arc finders, to assess their efficiency and completeness and to identify biases in the face of large future datasets.

Image simulations for weak lensing systematic errors characterization
Recent years have seen rapid advances in astronomical instrumentation and technology that enable the production of large data sets for astronomical investigations. The decrease in statistical uncertainties in these large data sets results in more stringent requirements to control, characterize, and correct for systematic errors. Weak gravitational lensing of the large-scale structure (cosmic shear) plays a central role in these galaxy surveys as a tool to probe the validity of the standard cosmological model and the nature of the dark sector of the Universe. In turn, the understanding and correction of systematic errors lies at the heart of weak gravitational lensing. Systematic errors include shape measurement precision, PSF measurement and deconvolution, instrument signatures, detector effects (e.g., in charge-coupled devices and near-infrared sensors), the measurement of photometric redshifts, intrinsic alignments, deblending, etc. (for a comprehensive review see Mandelbaum (2018)). Image simulations with weak gravitational lensing effects have been used to understand these errors and correct for them to the precision required by current and future experiments. End-to-end simulations have been created to bring together the weak lensing community under a common framework and produce challenges such as the Shear TEsting Programme (STEP, Heymans et al. 2006) and GRavitational lEnsing Accuracy Testing 2008 (GREAT08, Bridle et al. 2010). Further iterations of these community-wide challenges (GREAT10 (Kitching et al. 2012) and GREAT3 (Mandelbaum et al. 2014)) focused on simulations in which one parameter at a time in the simulation pipeline could be controlled, allowing for a more detailed look at each step in the process of galaxy shape measurement for cosmic shear.
The simulation tools used to create the images in GREAT3 are publicly available as the modular image simulation code GalSim, which has been used extensively to study the impact of systematic effects on weak lensing measurements (e.g., Plazas et al. 2016; Kannawadi et al. 2015; Gruen et al. 2015; Kamath et al. 2019) and as part of pipelines that create mock images in which the selection function of the system is directly measured by inserting simulated objects into real data (e.g., Balrog (Suchyta et al. 2016), SynPipe (Huang et al. 2017)). Simulations have also played a central role in the development, testing, and validation of shear estimation methods (e.g., Sheldon et al. 2019; Plazas 2012). In the context of the future WFIRST weak lensing program, Troxel et al. (2019) use GalSim to render images for a simulation suite designed to carefully study the weak lensing systematic errors relevant for WFIRST's High-Latitude Imaging Survey (Doré et al. 2018). Fig. 2 shows examples of images produced with GalSim.
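The logic of such calibration simulations can be reduced to a toy numpy example: shear a large ensemble of noisy galaxy shapes by known inputs, "measure" them with a deliberately biased stand-in estimator (the bias values m = −0.02 and c = 10⁻³, the linear estimator model, and the ellipticity dispersion are all invented), and recover the multiplicative and additive biases from the input-output relation g_out = (1 + m) g_in + c:

```python
import numpy as np

rng = np.random.default_rng(7)

def biased_estimator(g_true, n_gal, m=-0.02, c=1e-3, sigma_e=0.26):
    """Stand-in for a full shape-measurement pipeline: returns the mean
    measured ellipticity for n_gal galaxies sheared by g_true."""
    e = (1 + m) * g_true + c + rng.normal(0.0, sigma_e, n_gal)
    return e.mean()

# Insert a grid of known shears and measure them back
g_in = np.linspace(-0.05, 0.05, 11)
g_out = np.array([biased_estimator(g, 2_000_000) for g in g_in])

# Linear fit g_out = (1 + m) g_in + c recovers the biases that real image
# simulations are used to calibrate out of shear catalogs
slope, intercept = np.polyfit(g_in, g_out, 1)
print(f"m = {slope - 1:+.4f}, c = {intercept:+.5f}")
```

In a real calibration campaign the "estimator" is the full measurement pipeline run on rendered images, but the fitted m and c play exactly this role.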

Synthetic sky images and catalogs
Ray-tracing techniques (Section 3.1.2), in conjunction with full or approximate N-body simulations (e.g., COLA (Tassev et al. 2013; Izard et al. 2017)), are also used in studies of cosmic shear. For example, DeRose et al. (2019) use the ADDGALS algorithm to populate dark matter simulations with galaxies and include gravitational lensing effects via the curved-sky ray-tracing code CALCLENS (Becker 2013). They produce a set of 18 synthetic DES Year 1 catalogs to z = 2.35 and to a depth of r ≈ 26 that include galaxy properties (e.g., positions, ellipticities, magnitudes), photometric errors, and galaxy cluster catalogs (obtained by applying finders such as redMaPPer (Rykoff et al. 2014)). These simulations have been used to calculate cosmological observables and tests, including quantities relevant for weak gravitational lensing analyses such as correlation functions (galaxy-galaxy, galaxy-shear, and position-position correlation functions, or 3×2-point correlation analyses (Abbott et al. 2018)) and photometric redshift distributions. Simulations such as MICE (Fosalba et al. 2015) use a similar approach to DeRose et al. (2019), but assume different approximations in the ray-tracing lensing calculation. Other simulations more focused on weak-lensing statistics using full ray-tracing have also recently been released (Takahashi et al. 2017; Harnois-Déraps et al. 2013).
Simulated wide-field images are also used to study systematic errors for weak lensing in a more comprehensive framework that includes data to calibrate the images. Tortorelli et al. (2020) use the Ultra Fast Image Generator (UFIG, Bergé et al. (2013)) to produce an implementation of the Monte Carlo Control Loops (MCCL, Refregier and Amara (2014)) framework for weak lensing systematic error studies and apply it to DES science verification data (e.g., Abbott et al. 2016). More recently, Kacprzak et al. (2019) applied MCCL to DES Year 1 data for cosmic shear studies.
Alternative approaches to forward-modeling simulation codes such as UFIG include methodologies that use machine learning methods, such as Generative Adversarial Networks (GANs), to let the machine infer both the astrophysical and the instrumental properties of a data set (Smith and Geach 2019). The Dark Energy Science Collaboration of LSST (LSST DESC) also uses wide-field simulations to prepare for the new challenges that the large and complex dataset produced by the LSST of the Rubin Observatory (approximately 40 billion objects and 50 PB of raw data (Jurić et al. 2017; Ivezic et al. 2008)) will generate. The latest of these efforts is referred to as Data Challenge 2 (DC2), the second of three planned synthetic datasets (Sánchez et al. 2020) designed to develop and validate data reduction methodologies and to study the impact of systematic effects on LSST data. DC2 includes the production, validation, and analysis of a 5000 sq-deg mock extragalactic catalog and a 300 sq-deg end-to-end image simulation. The extragalactic catalog is produced by the CosmoDC2 pipeline (Korytov et al. 2019), and is based on the Outer Rim N-body simulation of about 1 trillion particles up to z = 10, produced with the Hybrid/Hardware Accelerated Cosmology Code (HACC). The simulations include a lensing pipeline that uses particle data from Outer Rim to generate light cones, projects the particles into redshift shells, and uses a ray-tracing algorithm to produce curved-sky lensing maps. The final information available for objects in the catalogs includes positions, sizes, shapes, shear, magnification, convergence, magnitudes, etc. The DC2 image simulations use a subset of the extragalactic catalog (300 sq-deg) and implement two approaches to produce the images: the Monte Carlo photon-shooting code PhoSim (Peterson et al. 2015) and the code ImSim, which relies on GalSim to produce the images using LSST-specific information.
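The photon-shooting idea used by codes like PhoSim can be sketched in a few lines (with a toy circular Gaussian galaxy and PSF; all scales are illustrative): instead of convolving analytic profiles, individual photons are drawn from the source profile, perturbed by the PSF, and binned into pixels, which yields Poisson noise for free:

```python
import numpy as np

rng = np.random.default_rng(3)

n_photons = 200_000
sigma_gal, sigma_psf = 1.2, 0.8          # arcsec (toy values)
pixel_scale, n_pix = 0.2, 64

# Draw photon positions from the galaxy profile, then give each a PSF kick;
# sampling both and adding is equivalent to convolving the two profiles.
photons = rng.normal(0.0, sigma_gal, (n_photons, 2))      # galaxy profile
photons += rng.normal(0.0, sigma_psf, (n_photons, 2))     # PSF convolution

# Bin onto the pixel grid: the counts follow the PSF-convolved profile with
# shot noise built in, which is the key property of photon-shooting codes.
edges = (np.arange(n_pix + 1) - n_pix / 2) * pixel_scale
image, _, _ = np.histogram2d(photons[:, 0], photons[:, 1], bins=[edges, edges])

sigma_tot = np.hypot(sigma_gal, sigma_psf)
print("expected total sigma:", sigma_tot, "photons binned:", image.sum())
```

The second moment of the binned image recovers σ_tot = (σ_gal² + σ_psf²)^(1/2), the width of the convolved profile, up to pixelization and sampling noise.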

Image simulations to assess the accuracy of photometric redshifts
The determination of accurate galaxy redshift distributions is a key requirement for precision cosmology with weak lensing, especially at the level of the requirements of the next generation of galaxy survey projects that will use weak and strong lensing ("Stage IV" experiments (Albrecht et al. 2006)) such as LSST, Euclid, and WFIRST. As such, image simulations constitute a valuable tool to test new algorithms and characterize any biases. Bellagamba et al. (2012), for example, use Skylens to generate mock images of ground- and space-based surveys in different optical and near-infrared bands, and test the performance of template-fitting methods of redshift estimation (e.g., BPZ (Benítez 2000)) under different parameters (such as seeing and depth). Bonnett et al. (2016) use simulations created with UFIG to calibrate and validate both template-fitting and training methods (in which a set of galaxies with known spectroscopic redshifts is used to train machine-learning algorithms, e.g., TPZ (Carrasco Kind and Brunner 2013), ArborZ (Gerdes et al. 2010)) in the context of the DES science verification data.
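A minimal template-fitting photo-z sketch in the spirit of such codes (the toy SED, break, band set, and error level are all invented for illustration, not taken from BPZ): redshift a template through a grid, synthesize broad-band fluxes, and minimize χ² with the template amplitude fit analytically at each trial redshift:

```python
import numpy as np

# Toy rest-frame SED: a smooth bump with a crude break bluewards of 3646 A
lam_rest = np.linspace(1000.0, 9000.0, 2000)                  # Angstrom
template = np.exp(-0.5 * ((lam_rest - 4000.0) / 1500.0)**2)
template[lam_rest < 3646.0] *= 0.3

# Five invented top-hat bands in the observed frame [Angstrom]
bands = [(4000, 5000), (5000, 6000), (6000, 7000), (7000, 8000), (8000, 9000)]

def synth_phot(z):
    """Mean template flux falling in each observed band at redshift z."""
    lam_obs = lam_rest * (1 + z)
    return np.array([template[(lam_obs > lo) & (lam_obs < hi)].mean()
                     for lo, hi in bands])

# "Observed" photometry: noiseless synthetic fluxes at the true redshift
z_true = 0.6
obs = synth_phot(z_true)
obs_err = 0.05 * obs.max()

# Chi^2 scan over the redshift grid, fitting the amplitude analytically
z_grid = np.arange(0.0, 2.0, 0.01)
chi2 = []
for z in z_grid:
    model = synth_phot(z)
    a = (obs * model).sum() / (model**2).sum()   # best-fit amplitude
    chi2.append((((obs - a * model) / obs_err)**2).sum())

z_phot = z_grid[int(np.argmin(chi2))]
print(f"z_true = {z_true}, z_phot = {z_phot:.2f}")
```

With noiseless photometry the scan recovers the input redshift; adding realistic noise and competing templates is what turns this into the degeneracy-prone problem that image simulations are used to characterize.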

Conclusion
We have presented a general overview, with some examples, of the role that image simulations play in strong and weak gravitational lensing. A century after the formulation of General Relativity, gravitational lensing has positioned itself as one of the most important tools in different areas of astrophysics. In particular, the strong and weak regimes of lensing have become central techniques in current and future experiments that seek to learn more about fundamental problems such as the nature of dark matter and dark energy, the processes of galaxy formation and evolution, the formation of the large-scale structure of the Universe, and the validity of alternative theories of gravity. Many efforts are focused on the understanding, characterization, and correction of systematic errors and measurement biases in both regimes. Thanks to efforts like these, cosmic shear surveys such as DES have produced the most precise cosmological constraints from a ground-based survey (Abbott et al. 2018) (with a precision comparable to that of cosmic microwave background observations), measuring the combination σ_8(Ω_m/0.3)^0.5 to a precision of about 3.5%. With the advent of Stage IV weak lensing surveys with lower statistical errors, such as LSST, WFIRST, and Euclid, the constraints on cosmological parameters will keep improving (Schaan et al. 2017), but only if systematic errors can be controlled. These wide-field surveys will also increase the number of non-transient (e.g., double source plane lenses) and transient (e.g., lensed quasars and supernovae) strong lensing events, which will help place constraints on the dark energy equation-of-state parameter and on the current H_0 tension. In this context, the creation of simulated images with known inputs is an invaluable tool to achieve the accuracy demanded by these projects and to fulfill their scientific potential.