
Studying the Imaging Characteristics of Ultra Violet Imaging Telescope (UVIT) through Numerical Simulations


Published 2009 June 4 © 2009. The Astronomical Society of the Pacific. All rights reserved. Printed in U.S.A.
Citation: Mudit K. Srivastava et al. 2009 PASP 121 621. DOI: 10.1086/603543


ABSTRACT

Ultra Violet Imaging Telescope (UVIT) is one of the five payloads aboard the Indian Space Research Organization (ISRO)'s ASTROSAT space mission, with broad science objectives extending from individual hot stars and star-forming regions to active galactic nuclei. The imaging performance of UVIT would depend on several factors in addition to the optics: e.g., resolution of the detectors, satellite drift and jitter, image frame acquisition rate, sky background, source intensity, etc. The use of intensified CMOS-imager–based photon counting detectors in UVIT adds its own complexity to the reconstruction of images. All these factors could lead to several systematic effects in the reconstructed images. A study has been done through numerical simulations with artificial point sources and an archival image of a galaxy from the GALEX data archive, to explore the effects of the above-mentioned parameters on the reconstructed images. In particular, the issues of angular resolution, photometric accuracy, and photometric nonlinearity associated with the intensified CMOS-imager–based photon counting detectors have been investigated. The photon events in image frames are detected by three different centroid algorithms with various energy thresholds. Our results show that in the presence of bright sources, reconstructed images from UVIT would suffer from photometric distortion in a complex way, and the presence of overlapping photon events could lead to complex patterns near the bright sources. Further, the angular resolution, photometric accuracy, and distortion would depend on the values of the various thresholds chosen to detect photon events.


1. INTRODUCTION

ASTROSAT is a multiwavelength space observatory to be launched by the Indian Space Research Organization (ISRO) in 2009–2010. It consists of five astronomical payloads that would allow simultaneous multiwavelength observations of astronomical objects from X-ray to ultraviolet (UV). The Ultra Violet Imaging Telescope (UVIT) is one of the payloads and is being developed with the aim of providing flux-calibrated images at a spatial resolution of ∼1.5''. UVIT records images simultaneously in three channels: far-ultraviolet (FUV, 1300–1800 Å), near-ultraviolet (NUV, 1800–3000 Å), and visible (VIS, 3200–5300 Å), with a ∼0.5° field of view. UVIT is configured as twin telescopes of Ritchey-Chrétien design with an aperture of ∼375 mm. While one of these would make images in the FUV, the other would be used for the NUV and visible regimes. The detectors used in UVIT are of a photon-counting design based on microchannel plate (MCP) image-intensifier technology. Similar detectors have been used in other UV missions (Far Ultraviolet Spectroscopic Explorer, FUSE; GALEX; etc.); however, the detector readout scheme in UVIT differs from these missions. FUSE and GALEX use double-delay line and cross-delay line anode systems for readout (Jelinsky et al. 2003; Sahnow 2003), while UVIT detectors incorporate a readout system based on CMOS image sensors. Because of this, the imaging characteristics of UVIT are expected to be very different from those of these UV missions. Readout systems based on CCDs have been discussed in the literature (Bellis et al. 1991; Siegmund 1999), and the performance of the CMOS-based readout system is expected to be similar. Various features of UVIT and its detectors are given in Table 1.

The UVIT detectors are being developed through a collaboration between the Canadian Space Agency (CSA) and ISRO. The basic design of these detectors is shown in Figure 1A. In this scheme, a UV photon is detected through its footprint, in the form of a light splash, on a CMOS sensor. As an example, a section of a CMOS data frame containing some footprints is shown in Figure 1B; these data frames were recorded during laboratory experiments of the detectors. Experimental studies by Hutchings et al. (2007) show that a photon event produces a light splash on the CMOS that roughly follows a Gaussian distribution with a full width at half maximum (FWHM) of ∼1.5 pixels. They also reported on the performance of several centroid algorithms used to determine the centroid of the light splash. This centroid gives the coordinates of the photon at a much higher resolution than one CMOS pixel; UVIT detectors are designed to obtain a resolution of ∼1''. Further, these intensified photon-counting detectors can be run either in a photon-counting mode with a very high gain (>10,000 electrons per photon on the CMOS) or in an integrating mode with a low gain (∼1000 electrons per photon on the CMOS). The photon-counting mode is the normal mode for the UV channels, while the visible channel works in the integrating mode. Images are later reconstructed using the centroid data. The spatial resolution and photometric properties of the reconstructed images would depend on several factors, e.g., the accuracy of the satellite drift correction, the detector hardware design, the choice of centroid algorithm, the effects of background, the frame acquisition rate, etc. In particular, nonlinearity is expected to be present in the reconstructed images due to the finite exposure time of the photon-counting detectors.

Fig. 1.—

Fig. 1.— (A) Schematic of the MCP-based photon-counting detector used in UVIT. (B) Section of a CMOS data frame containing some photon events, recorded during laboratory experiments of the detectors.

In this article we report on the numerical simulations and their results, and discuss how they can be used to estimate the angular resolution and photometric properties of the UVIT system. Section 2 describes the process and results of the simulations to estimate the satellite drift. UVIT data simulation, image reconstruction, various parameters/thresholds, and related errors are discussed in § 3. The various results regarding the photometric properties of the UVIT images for simulated point sources are presented in § 4. In § 5, simulated UVIT images of some extended sky sources (adopted from archival GALEX and Hubble Space Telescope (HST) ACS data) are presented. The angular resolution of reconstructed UVIT images is discussed in § 6. Finally, § 7 summarizes all the results.

2. SIMULATION TO ESTIMATE DRIFT OF ASTROSAT USING VISIBLE CHANNEL OF UVIT

As estimated by ISRO, ASTROSAT would drift by ∼0.2'' s-1 during observations. As the satellite drift is quite large, sharp images are obtained by taking a series of short exposures and adding them with shifts corresponding to the drift. Images from the visible channel (3200–5300 Å), taken every ∼1 s, can be used to track the aspect drift of the satellite. We have simulated this process for fields of stars from the online ESO HST Guide Star Catalog. Typically, a field of ∼29' diameter is selected to match the field of UVIT. Simulations have been done for seven fields at different galactic latitudes. For each field, either all the stars brighter than magnitude 15, or the brightest 100 stars, are taken. The visible channel has an effective area of ∼25 cm2; i.e., a star of magnitude 15 corresponds to ∼25 detected photons s-1.

In the simulation, we generate images of 1 s exposure each. As the satellite is drifting, each of these images is obtained by applying shifts and then adding 10 frames, each of 0.1 s exposure. To obtain the 0.1 s exposure frames, we proceed in the following sequence:

  • 1.  
    Calculate the total number of photons obtained in 1 s for each star and for the background.
  • 2.  
    The total number of photons is divided randomly among 10 frames, each of 0.1 s.
  • 3.  
    Background photons are randomly distributed in each frame.
  • 4.  
    For the point spread function (PSF) of the optics, the photons for each star are distributed as per a Gaussian distribution with a root mean square (rms) of 0.3 pixel on either axis.
  • 5.  
    Each of the photons is now converted to an average of 16 electrons.
  • 6.  
    The electrons are spatially distributed as per a Gaussian distribution with an rms of 0.7 pixel on each axis, which is taken as the footprint of photon events on the CMOS.
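The six steps above can be sketched as follows (a minimal Python sketch; the function name `make_subframes`, the use of positions relative to the star's true location, and a fixed 16 electrons per photon rather than an average are our simplifying assumptions, not the actual simulation code):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subframes(n_photons_per_sec, n_frames=10, psf_rms=0.3,
                   electrons_per_photon=16, splash_rms=0.7):
    """Distribute one second's worth of photons from a star into n_frames
    short exposures, returning per-frame electron positions (in pixels,
    relative to the star's true position)."""
    # Step 2: split the photons randomly among the frames
    frame_ids = rng.integers(0, n_frames, n_photons_per_sec)
    frames = []
    for f in range(n_frames):
        n = int(np.sum(frame_ids == f))
        # Step 4: PSF of the optics, Gaussian with rms 0.3 pixel on each axis
        px = rng.normal(0.0, psf_rms, n)
        py = rng.normal(0.0, psf_rms, n)
        # Steps 5 and 6: each photon becomes 16 electrons (here fixed; the
        # text uses 16 as an average), spread with rms 0.7 pixel on each axis
        ex = np.repeat(px, electrons_per_photon) + rng.normal(
            0.0, splash_rms, n * electrons_per_photon)
        ey = np.repeat(py, electrons_per_photon) + rng.normal(
            0.0, splash_rms, n * electrons_per_photon)
        frames.append((ex, ey))
    return frames

# A magnitude-15 star: ~25 detected photons per second
frames = make_subframes(25)
total_electrons = sum(len(ex) for ex, _ in frames)
```

The electron clouds from all stars and the background would then be binned onto the pixel grid of each 0.1 s frame before the shift-and-add step.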

We have assumed Gaussian statistics (with variance equal to the mean) to estimate the number of photons in each 1 s exposure, and to estimate the number of electrons in each photon event. This is an acceptable approximation as the average number of photons (or electrons) is ≫1. Further, we have assumed that each photon on average generates 16 electrons, while in practice this number is expected to be >1000. This has been done for ease of computation: 16 is large enough to reduce the rms error in the effective location of the photon to well below the rms of 0.3 pixel for the PSF, and in any case this low value only overestimates the errors in finding the positions of the stars. To be consistent with the assumed low number of 16 electrons per photon, the read noise is assumed to be 1 electron rms.

To estimate the drift of ASTROSAT, the first 10 images (each of 1 s exposure) are added to obtain a reference image. The centroids of stars in the reference image are taken as the standard, and the centroids in the other 1 s exposure images are compared with these to find the drift parameters, i.e., the drifts in the X and Y directions and the rotation, of the other images. Further, in order to minimize the errors due to noise, a local second-order polynomial fit is made to the estimated drift parameters for each interval of 20 s. The fit is used to find a new value of the drift parameters at the central point of the interval.
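The smoothing step can be sketched as follows (a minimal sketch; the name `smooth_drift`, the sliding evaluation at every sample, and the handling of the series edges are our assumptions, since the text only specifies a local second-order fit per 20 s interval evaluated at the interval's central point):

```python
import numpy as np

def smooth_drift(t, drift, window=20.0):
    """Fit a second-order polynomial to the raw drift-parameter estimates
    inside a sliding window of `window` seconds, and return the fitted
    value at each window's central point."""
    t = np.asarray(t, dtype=float)
    drift = np.asarray(drift, dtype=float)
    smoothed = np.empty_like(drift)
    for i, ti in enumerate(t):
        sel = np.abs(t - ti) <= window / 2.0
        # fit in a time frame centred on ti; the constant term of the
        # polynomial is then the fitted value at the interval centre
        coeffs = np.polyfit(t[sel] - ti, drift[sel], deg=2)
        smoothed[i] = coeffs[-1]
    return smoothed
```

The same fit would be applied independently to each of the three drift parameters (X, Y, and rotation).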

We have obtained a time series of the drift of the aspect from ISRO. The series was obtained by simulating the pointing of ASTROSAT (with its payloads) for the typical characteristics of the rate gyros and star tracker used by ISRO. Figure 2 shows the simulated cumulative satellite drift along the pitch and yaw directions for 3000 s. The effect of drift in roll on the CMOS detector is negligible compared to the drift in these two axes and hence is not shown. From this time series we selected the worst period (850–1150 s) to estimate the errors in determining the drift; this period is assessed as the worst because it coincides with the rotation of one of the payloads on the satellite and shows the largest drift rates. For a graphical illustration of worst-case errors, results for one of the fields (centered at Galactic latitude = +90° and Galactic longitude = 80°), and for 10 photons (as compared to the expected 25) for a magnitude 15 star, are shown in Figure 3. It is evident from the plot that even in the worst case, the satellite drift can be determined with an rms error < 0.05 CMOS pixels, or < 0.15'', on either axis.

Fig. 2.—

Fig. 2.— Simulated drift of the ASTROSAT satellite (data provided by the ISRO Satellite Centre). This drift is taken as input to simulate the process of drift estimation using the visible channel of UVIT. The drifts in the pitch and yaw directions are shown.

Fig. 3.—

Fig. 3.— Errors in the estimation of satellite pitch using the visible channel of UVIT. The result is shown for the time interval 850–1150 s, as this interval provides the worst-case conditions for drift estimation.

3. SIMULATIONS TO GENERATE IMAGES FROM UVIT

We simulated UVIT image frames as closely as possible to real observations, considering all the known effects that could degrade the final image quality: sky background, source intensity and saturation effects, satellite drift and jitter, optics performance, image frame acquisition rate, variations in dark frames, and various detector parameters. Though the final aim of these simulations was to study the effects of these factors on the photometric properties and angular resolution of extended astronomical objects, we also simulated images of artificial point sources to explore and understand the origin of various features in the reconstructed images.

3.1. Generating the UVIT Data Frames

Data frames were simulated from input images of artificial point sources and extended sources (to be discussed in § 5). The images for extended sources had an input pixel scale of 0.2'' or 0.5''. Given the total integration time and the UVIT parameters (mirror diameter, CMOS pixel scale, frame acquisition rate, etc.), each input pixel would produce an average number of photons during the whole integration time. The actual photons in each input pixel were then generated using Poisson statistics, and these photons were randomly distributed among the total number of data frames. The blurring due to optics and detectors was approximated by a two-dimensional (2D) Gaussian function with standard deviation (σ) of 0.7'' (on either axis). The spatial position of each photon in the data frames was determined by this function and recorded with an accuracy of 1/8 of an input pixel. Further, drifts of the satellite (Fig. 2) along the pitch, roll, and yaw directions were incorporated, so that the data frames were drifted with respect to each other.

The photon-counting detector is simulated by converting each of the photons into a bunch of electrons on the CMOS, i.e., a photon event. This includes the effects of the MCP, phosphor screen, and fiber taper (see Fig. 1A). The number of electrons in each bunch is obtained from a Gaussian distribution with an average of 30,000 electrons and a σ of 6000 electrons. The footprint of each photon event was distributed over an area of 5 × 5 CMOS pixels following a 2D symmetric Gaussian distribution with σ of 0.7 CMOS pixel. Figure 4A shows the footprint of a simulated photon event. A Poisson distribution was then invoked to get the actual number of electrons in a CMOS pixel. This number was further divided by 20 to get the output in digital units (DU). Finally, a randomly selected raw dark frame was added to each of the data frames. Thus, UVIT data frames of 512 × 512 CMOS pixels were generated, containing the footprints of photon events against a laboratory-recorded dark frame. These raw dark frames were taken at ∼22°C with ∼34 ms exposures during laboratory experiments of the UVIT detectors (obtained from J. Postma, 2008, private communication). These frames had random fluctuations in pixel values with an rms of ∼1 DU, along with a gradient of ∼200 DU across the columns. The dark frames were used directly without any processing. The response of the CMOS sensor has a local pixel-to-pixel variation of 0.39% rms (as per the Star250 CMOS sensor data sheet). Such a variation would have very little effect on the centroid determination of the photon events, and it has not been considered in the simulations.
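The event-stamping step described above can be sketched as follows (a simplified Python sketch; the helper names `pixel_fraction` and `add_photon_event` are ours, and the dark-frame addition is omitted):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def pixel_fraction(offset, sigma=0.7):
    """Integral of a unit 1-D Gaussian (rms `sigma` pixels) over a pixel
    whose centre lies `offset` pixels from the Gaussian's centre."""
    s = sigma * sqrt(2.0)
    return 0.5 * (erf((offset + 0.5) / s) - erf((offset - 0.5) / s))

def add_photon_event(frame, x, y, mean_electrons=30000.0,
                     sd_electrons=6000.0, gain=20.0, size=5):
    """Stamp one photon event onto `frame` (in DU): a symmetric Gaussian
    light splash over a 5 x 5 pixel footprint, with the total electron
    count drawn from N(30,000, 6,000) and Poisson noise per pixel."""
    n_e = max(rng.normal(mean_electrons, sd_electrons), 0.0)
    x0, y0 = int(round(x)), int(round(y))
    half = size // 2
    for j in range(-half, half + 1):
        for i in range(-half, half + 1):
            # expected electrons in pixel (x0+i, y0+j), then divide by the
            # gain of 20 electrons per DU
            mean = n_e * pixel_fraction(x0 + i - x) * pixel_fraction(y0 + j - y)
            frame[y0 + j, x0 + i] += rng.poisson(mean) / gain
```

With σ of 0.7 pixel, the 5 × 5 footprint captures essentially all of the splash energy, so a single event deposits roughly 30,000/20 = 1500 DU in total.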

Fig. 4.—

Fig. 4.— Simulated photon-event footprints on the CMOS detector. (A) A single photon event; (B) configuration of a few overlapping photon events; (C) section of a simulated UVIT data frame corresponding to a sky region with a high count rate. Overlapping photon events would lead to incorrect determination of the event centroids and possibly an incorrect number of detected events in a frame. Each pixel in these images corresponds to 3 × 3 arcsec2 on the sky.

3.2. Reconstruction of the Image: Centroid-Finding Algorithms and Energy Thresholds

The reconstruction of the image from UVIT data frames involves two critical steps: (i) detection of the photon events within a data frame, and (ii) calculation of the centroids of the detected events. While in actual practice these two tasks would be performed by the hardware on the payload itself, the same process has been simulated here. Photon events are detected by scanning the UVIT data frames with one of three event-detection algorithms, namely 3-cross, 3-square, and 5-square. These algorithms compare the value of each pixel to its surrounding pixels; the definition of the surrounding pixels varies among the algorithms. The 3-cross algorithm uses the pixels adjacent to the central pixel along its row and column; the 3-square algorithm uses the surrounding 8 pixels in the shape of a 3 × 3 matrix; and the 5-square algorithm takes the surrounding 24 pixels in the shape of a 5 × 5 matrix for comparison. Each of the algorithms applies the following three criteria to detect a photon event in a data frame (Hutchings et al. 2007):

  • 1.  
    A pixel is a candidate for a photon-event center if its value is larger than those of the surrounding pixels contained in the centroid window.
  • 2.  
    The energy of this central pixel (defined as the pixel value after background subtraction) must exceed a minimum energy threshold, in order to discard fake events.
  • 3.  
    Another energy threshold is applied to the total energy of a photon event, defined as the sum of all the pixel energies within the algorithm's centroid window; the total energy must also exceed this threshold.

If any pixel adjacent to the central pixel in a photon-event footprint has the same value as the central one, the central pixel is not considered a candidate for a photon-event center, even if it is a genuine footprint. This restriction would lead to a certain fraction of photons (∼0.6%) being missed, irrespective of the centroid algorithm used. Event centroids are then calculated using the center of gravity method (Michel et al. 1997), following each algorithm's pixel configuration; the centroid coordinates are estimated at a much higher resolution than a CMOS pixel. The background level for a detected photon event is estimated from the minimum of the four corner pixels of the 5 × 5 pixel window around the event center.
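The three criteria can be illustrated for the 3-square case (a simplified Python sketch; the function name and default thresholds are ours, the background is taken as the 5 × 5 corner minimum as described above, and the flight hardware implementation will differ):

```python
import numpy as np

def detect_events_3square(frame, e_center=150.0, e_total=450.0):
    """Detect photon events with the 3-square rule: a pixel is an event
    centre only if it is strictly greater than its 8 neighbours (a tie
    disqualifies it), its background-subtracted value exceeds e_center,
    and the background-subtracted 3 x 3 sum exceeds e_total.  The local
    background is the minimum of the four corners of the 5 x 5 window."""
    events = []
    ny, nx = frame.shape
    for y in range(2, ny - 2):
        for x in range(2, nx - 2):
            win = frame[y - 1:y + 2, x - 1:x + 2]
            if frame[y, x] < win.max() or (win == frame[y, x]).sum() > 1:
                continue   # not a strict local maximum
            bg = min(frame[y - 2, x - 2], frame[y - 2, x + 2],
                     frame[y + 2, x - 2], frame[y + 2, x + 2])
            if frame[y, x] - bg < e_center:
                continue   # fails the central pixel energy threshold
            energy = win - bg
            if energy.sum() < e_total:
                continue   # fails the total event energy threshold
            # centre-of-gravity centroid over the 3 x 3 window
            ys, xs = np.mgrid[-1:2, -1:2]
            events.append((x + (energy * xs).sum() / energy.sum(),
                           y + (energy * ys).sum() / energy.sum()))
    return events
```

The 3-cross and 5-square variants differ only in which neighbours enter the comparison and the energy sums.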

The imaging performance of UVIT would be greatly affected by the presence of another photon event in the neighborhood of one event in a data frame. This situation is referred to as a double photon event. In this case, the footprints of the two events would mix, leading to incorrect estimation of the event centroids and thus affecting the angular resolution. Further, if the two events are close enough, one of them may remain undetected. Figure 4B shows footprints of simulated overlapping photon events. Figure 4C depicts the situation in a crowded field. These multiple overlapping photon events would be a major source of concern, especially in the case of extended sources, and it is highly desirable to keep track of such incidents. Double photon events can be identified by adding another threshold, namely a rejection threshold: the difference between the highest and lowest values of the four corner pixels of the 5 × 5 pixel matrix around the event's central pixel. For a genuine single photon event this difference would be expected to be very low; however, the presence of another photon event in the neighborhood would raise the value of one of the corner pixels well above the background. Therefore, a high value of the corner maximum minus the corner minimum is an indication of a double photon event.
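The rejection statistic is simple to state in code (a sketch; the function name `corner_spread` is ours):

```python
import numpy as np

def corner_spread(frame, y, x):
    """Rejection statistic for the event centred at (y, x): the difference
    between the highest and lowest of the four corner pixels of the 5 x 5
    window.  A large value suggests an overlapping second event nearby."""
    corners = [frame[y - 2, x - 2], frame[y - 2, x + 2],
               frame[y + 2, x - 2], frame[y + 2, x + 2]]
    return max(corners) - min(corners)
```

An event would be flagged as a (possible) double event whenever this value exceeds the chosen rejection threshold.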

In the presence of double/multiple photon events, the statistics of detected events would differ among the centroid algorithms. The percentage of events detected by the 3-cross algorithm is highest, while it is lowest for the 5-square algorithm. This is because the 3-cross algorithm has the smallest footprint and thus is least affected by overlapping events (Hutchings et al. 2007). However, the 3-cross algorithm may also detect false events because of its shape. For example, for a single isolated photon event falling near a pixel corner, the values of the neighboring 2 × 2 pixels would be nearly equal. If, due to fluctuations, the diagonally opposite pixels had values greater than the remaining two pixels, the 3-cross algorithm would detect two events centered at those diagonally opposite pixels. Similarly, two (or more) overlapping photon events could also lead to false event detection with the 3-cross algorithm. It is worth noting that neither the 3-square nor the 5-square algorithm would detect such false events, due to their extended footprints.

3.3. Errors in Centroid Determination

The issue of centroid errors in intensified CCD/CMOS-based photon-counting detectors has been discussed by several authors (Dick et al. 1989; Michel et al. 1997; Hutchings et al. 2007). As discussed earlier, the centroids of the photon events are calculated with the center of gravity method. In addition to random errors associated with noise and background variation (Michel et al. 1997), this method also imposes a systematic bias on the centroid values in the form of a "modulation pattern" (Dick et al. 1989; Michel et al. 1997). This pattern would be visible in the reconstructed images as a regular grid imposed on the images, with a grid spacing of one CMOS pixel. Various factors that can cause the modulation pattern have been discussed by Dick et al. (1989); however, the main cause in reconstructed UVIT images is the exclusion of the wings of photon event footprints by the centroid algorithms. As the simulated photon event footprints have a FWHM of ∼1.65 CMOS pixels, the wings of the energy distribution would be omitted by the 3-square and 3-cross algorithms, while the 5-square shape would collect almost all the energy. Therefore, the modulation pattern is clearly visible in the images reconstructed by the 3-square and 3-cross centroid algorithms, whereas images generated by the 5-square algorithm do not suffer from this effect.

The modulation pattern is removed by comparing the theoretical and measured cumulative probability distributions of flat-field centroid data. In the absence of any systematic effect, the distribution of centroids over the pixel face would be uniform. This information was used to find the true centroids of the events, thus removing the systematic bias. For this purpose, over 1,000,000 photon events were simulated at random positions on the face of the pixel, and their centroid values were determined with all three centroid algorithms. The decimal parts of these calculated centroid positions were then used to form the cumulative probability distribution. By comparing the theoretical and calculated distributions, true centroids were determined. This correction was applied to the centroid data generated by the 3-cross and 3-square algorithms only, as the modulation pattern was very prominent in these cases.
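The correction amounts to remapping the measured sub-pixel fractions through their empirical cumulative distribution (a minimal sketch under that interpretation; the function names and the 64-bin resolution are our assumptions):

```python
import numpy as np

def build_centroid_remap(decimals, n_bins=64):
    """From the decimal parts of many flat-field centroids, build a lookup
    table mapping the biased (measured) fraction to a corrected one.  For
    a flat field the true fractions are uniform, so the empirical CDF of
    the measured fractions is itself the correction function."""
    hist, edges = np.histogram(decimals, bins=n_bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / hist.sum()   # empirical CDF at upper bin edges
    return edges[1:], cdf

def correct_centroid(frac, edges, cdf):
    """Remap one measured fractional centroid to its corrected value."""
    return float(np.interp(frac, edges, cdf))
```

Applied to every detected event, this flattens the grid structure that the modulation pattern imposes on the reconstructed image.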

Random errors are also present in the centroid estimation due to noise in the background dark frames. The dark frames have random fluctuations in pixel values with an rms of ∼1 DU, along with a gradient of ∼200 DU across the columns. This leads to an rms error of < 0.01 pixel in the calculated centroid position. To estimate these random errors, 1000 photon events were simulated on the dark background frames at a given location on the pixel face. Their centroids were determined using each of the three centroid algorithms, with an accuracy of 1/64 of a pixel. In the case of the 3-cross and 3-square algorithms, the necessary corrections were also applied to remove the modulation pattern.

The bias and random errors in event centroids are shown in Figure 5 in the form of 2D error maps for the 3-square algorithm. These error maps represent the face of a CMOS pixel, with the center being the actual position of the photon. A data point in the error maps represents the detected position of a photon event. Results for three locations on the CMOS pixel are presented: (i) near a corner; (ii) at the midpoint between the center and a corner along the diagonal; and (iii) at the center of the CMOS pixel. The effect of the modulation pattern is clearly visible in the case where the photon events fall near the pixel corner (Fig. 5A): it puts a systematic bias of ∼0.15 CMOS pixel on the calculated position of the photon and causes the calculated centroids to fall in any one of the four corner pixels. A bias of ∼0.05 pixel can also be seen when a photon falls between a corner and the center (Fig. 5B). A shift of ∼0.01 pixel is found in all cases, due to the precision limit of the simulation (1/64 pixel). The rms of the scatter is between ∼0.01 and ∼0.02 pixels, and is smallest when the photon events fall at the center of the pixel. The error maps in Figure 5 thus show that the accuracy of centroid determination depends on the location of the photon event over the pixel face, varying by ∼0.01 to ∼0.02 pixels from the center of the pixel to its corner.

Fig. 5.—

Fig. 5.— Error maps for the 3-square algorithm, representing the face of the CMOS pixel with the center as the original position of the incoming photon. A data point in these maps corresponds to the detected position of the photon with respect to the original one. The upper panels (A, B, C) show the uncorrected data with systematic bias (in the form of a modulation pattern); the lower panels (D, E, F) correspond to the data corrected to remove the systematic bias. The plots at left (A, D) represent the case when the photon falls at a corner of the CMOS pixel; the central plots (B, E) correspond to the case when the photon falls between the center and a corner of the pixel (along the diagonal); the plots at right (C, F) represent the case when the photon falls at the center of the pixel. The rms values of the scatter are given with the plots for each axis, in units of CMOS pixels. A CMOS pixel corresponds to 3 × 3 arcsec2 on the sky. (The threshold on central pixel energy used = 150 DU; the threshold on total event energy used = 450 DU.)

4. PHOTOMETRIC PROPERTIES OF THE RECONSTRUCTED IMAGES

As explained in § 3, image reconstruction from UVIT data frames involves the use of various energy thresholds. It is anticipated that the photometric accuracy of the reconstructed images would depend on the choice of thresholds. The values of the energy thresholds used for event detection could lead to a photometric bias over the pixel face. Also, if the rejection threshold is not chosen properly, double photon events would be used for image reconstruction, thus deteriorating the image resolution. The results from the simulations indicate that the reconstructed images suffer from photometric nonlinearity, and that this nonlinearity follows a complex pattern closely tied to the values of the thresholds used. These effects are discussed in the following subsections.

4.1. Photometric Variation over the Pixel due to Energy Thresholds

The distribution of the energy of a photon event over CMOS pixels depends on the location of the photon event on the face of the pixel. Photon events falling at the center of a pixel would have a higher fraction of their energy within the centroid algorithm shape (as well as in the central pixel) compared to photon events falling near the edges or corners of the pixel. Therefore, the energy thresholds on the central pixel energy and the total event energy would result in a selection bias over the face of a pixel, and the probability of selection of photon events would not be uniform over the pixel face. This nonuniformity would also vary among the three centroid algorithms, given their different centroid window shapes.

To estimate this nonuniformity, photon events are simulated at various locations over the pixel face; a pixel is divided into 16 × 16 cells for this purpose. For a given set of energy thresholds, the fraction of rejected events in each cell is determined. This rejection fraction over the face of the CMOS pixel is shown in Figure 6 for different sets of energy thresholds for the 3-square algorithm. As shown in Figure 6A, for low values of the energy thresholds, the rejection fraction is negligible (< 1%); hence there would not be any selection bias or nonuniformity over the face of the CMOS pixel. Figure 6B shows the case where the threshold on central pixel energy is kept very low and the threshold on total event energy is kept very high; in this case the selection of events is primarily decided by the total energy threshold. Here the response of the 3-square algorithm is not uniform over the pixel face, and ∼20% of the events falling near the corners would remain undetected. This is due to the high total energy threshold: for events falling near a corner of a CMOS pixel, a significant amount of the total event energy falls outside the 3-square algorithm shape. On the other hand, if the threshold on central pixel energy is given a very high value and the threshold on total event energy is kept at a low value, the nonuniformity is prominent over the pixel face: almost all the events falling on the corners of the pixel are rejected, with the rejection fraction varying from ∼65% at the center to 100% at the corners.

Fig. 6.—

Fig. 6.— Plots showing the nonuniformity over CMOS pixel face due to energy thresholds for the 3-square algorithm. These plots show the variation of the fraction of rejected photon events over the CMOS pixel, for different sets of energy thresholds. (A) Central pixel-energy threshold and the total energy threshold are kept at low values of 150 DU and 250 DU respectively; (B) total energy threshold is given a high value of 1050 DU while keeping the central pixel energy at a low value of 150 DU; (C) shows the variation when central pixel energy is kept at high value of 450 DU while the total energy threshold is at low value of 650 DU.

Thus the selection of energy thresholds decides the nonuniformity of event detection over the face of a pixel. It is interesting to note that the other two algorithms would also be affected by the choice of energy thresholds. The 5-square algorithm would collect most of the photon-event energy contained within the footprint (irrespective of its location over the pixel face) and thus would be the least sensitive to the total energy threshold. On the other hand, events falling near the corners of a pixel would send a significant amount of their energy outside the 3-cross algorithm's centroid window, making it the most sensitive to the total energy threshold. The threshold on central pixel energy would affect all the centroid algorithms in the same manner, as the same central pixel energy is used by each. Although low threshold values would result in a flat response over the pixel face, they would also lead to the detection of false events; high thresholds, on the other hand, would lead to rejection of genuine events along with a selection bias over the pixel.
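The position dependence of the rejection fraction can be estimated analytically for the 3-square case (a simplified sketch, treating the splash as a noiseless Gaussian and the event energy as Gaussian-distributed with mean 1500 DU and σ 300 DU, i.e. 30,000 ± 6000 electrons at 20 electrons per DU; function names are ours):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def pixel_fraction(offset, sigma=0.7):
    """Fraction of a 1-D Gaussian footprint (rms `sigma` px) falling in a
    pixel whose centre is `offset` pixels from the event position."""
    s = sigma * sqrt(2.0)
    return 0.5 * (erf((offset + 0.5) / s) - erf((offset - 0.5) / s))

def rejection_prob(fx, fy, e_center, e_total,
                   mean_energy=1500.0, sd_energy=300.0):
    """Probability that an event at sub-pixel position (fx, fy), measured
    from the pixel centre, fails either 3-square threshold."""
    # fractions of the total event energy in the central pixel and in the
    # full 3 x 3 centroid window, for a noiseless Gaussian splash
    fc = pixel_fraction(-fx) * pixel_fraction(-fy)
    fw = sum(pixel_fraction(i - fx) * pixel_fraction(j - fy)
             for i in (-1, 0, 1) for j in (-1, 0, 1))
    # smallest total event energy that passes both thresholds
    e_min = max(e_center / fc, e_total / fw)
    return norm_cdf((e_min - mean_energy) / sd_energy)
```

With a 450 DU central threshold and 650 DU total threshold, this toy model gives roughly two-thirds rejection at the pixel center and near-total rejection at the corners, in line with the ∼65%-to-100% variation described above; with the low thresholds of Figure 6A it gives well under 1% everywhere.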

4.2. Impact of Double Events over Photometric Linearity in Simulated UVIT images

As discussed in § 3.2, a double photon event refers to a situation in which two photon events fall very near to each other in a UVIT data frame, such that their footprints overlap. These double or multiple photon events can be identified using the rejection threshold; they would affect the final image in various ways, such as deteriorating the angular resolution, producing inaccurate photometry, and introducing nonlinearity in the reconstructed images. As will be discussed in this section, for sources with a high photon count rate per frame, these double/multiple photon events would not only give rise to photometric nonlinearity but would also lead to complicated patterns in the reconstructed images. Given the complex structures of astronomical sources (such as galaxies), the nature of these effects was first studied for artificially simulated point sources.

Point sources (up to 1/64'' × 1/64'') with a photon count rate of 25 counts s-1 (∼0.8 counts per frame) were simulated against an average sky background of 0.004 counts s-1 per arcsec2. The data frames were generated for 3000 s with a frame acquisition rate of 30 frames s-1. The effect of optics was not considered in these simulations, and the modeled satellite drift was added to each of the data frames. The same drift was later removed while reconstructing the final image; however, the errors in the drift corrections were incorporated. The output images of these point sources were then reconstructed from the data frames using all three centroid algorithms, with a pixel scale of 1/32 arcsec. A central pixel-energy threshold of 150 DU and a total energy threshold of 450 DU were used for event detection; with these threshold values, the nonuniformity over the CMOS pixel face was found to be < 1% rms. Also, the systematic bias due to the modulation pattern was corrected in the case of the 3-square and 3-cross algorithms, as discussed in § 3.3.

The double/multiple photon events are responsible for nonlinearity in the photometry of reconstructed images. It turns out that the presence of a strong source would severely affect the photometry of neighboring regions, giving rise to photometric distortion/error in the reconstructed images. To quantify the consequences of such a case, the ratio of the final reconstructed image to the corresponding true image (constructed with the known positions of all the photons on the detector, irrespective of single/double events) was calculated after smoothing both images through convolution with Gaussian functions. Both images were convolved with two Gaussian functions, with σ of 0.5'' and 0.25''; the first Gaussian was truncated at ±2σ on either side, while the second was truncated at ±3σ.
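
One possible reading of this smoothing step is sketched below, under the assumptions that the two truncated kernels are applied in sequence and that σ is expressed in the 1/32'' reconstruction pixels of § 4.2 (so 0.5'' = 16 px and 0.25'' = 8 px); the function name and defaults are illustrative, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ratio_map(reconstructed, true, sigmas_px=(16.0, 8.0), truncates=(2.0, 3.0)):
    """Smooth both images with the two truncated Gaussian kernels,
    then return their pixel-by-pixel ratio (1.0 where the true image
    is empty, to avoid division by zero)."""
    r = reconstructed.astype(float)
    t = true.astype(float)
    for s, tr in zip(sigmas_px, truncates):
        r = gaussian_filter(r, s, truncate=tr)
        t = gaussian_filter(t, s, truncate=tr)
    return np.divide(r, t, out=np.ones_like(r), where=t > 0)
```

The `where` guard keeps the ratio well defined in regions where the true image has no counts at all.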

Figure 7 shows these ratio maps for the 3-square algorithm, using rejection thresholds of 40 DU and 500 DU (see § 3.2), respectively. In both maps, the central 1 × 1 CMOS pixel area containing the source has been masked. These ratio images show very interesting and complex structures in the photometric properties of the reconstructed images. The background intensity is observed to be reduced by as much as ∼30%. As the probability of losing a background photon due to another background photon is negligible, the presence of a strong source is the main cause of this deficiency of background photons. In fact, it turns out that a strong source would affect the photometry of neighboring regions more severely than its own photometry. As an estimate, for a source count rate of ∼0.8 counts per frame and the sky background described in this section, the probability that a background photon and a source photon occur in the same frame is ∼57%, whereas the probability of two or more source photons occurring in the same frame is ∼20%. The ratio maps in Figure 7 show a significant effect of the rejection threshold on such photometric structures. In Figure 7A, where the rejection threshold is kept at a low value (40 DU), most of the possible double events are rejected. Since the constraint on double photon events is strict, even barely overlapping events are rejected, which extends the region affected by the source. In contrast, in Figure 7B, due to the high value of the rejection threshold (500 DU), the majority of photon events, even corrupted ones, are selected, thus reducing the extent of the photometric structures.
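
The quoted coincidence probabilities can be checked with a simple Poisson estimate (a back-of-envelope sketch; the figures in the text may include effects beyond pure Poisson statistics):

```python
from math import exp

lam = 0.8  # source counts per frame

# P(at least one source photon in a frame): the chance that a given
# background photon has a coincident source photon.
p_source_in_frame = 1 - exp(-lam)

# P(two or more source photons in the same frame).
p_two_or_more = 1 - exp(-lam) * (1 + lam)
```

This gives ≈55% and ≈19%, in line with the ∼57% and ∼20% quoted above.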

Fig. 7.— Ratio maps showing the photometric error in the background due to the presence of a strong point source for the 3-square algorithm, generated by taking the ratio of the final reconstructed image to the image created with the known positions of all the photons on the detector. The effects of optics have not been considered; thus these maps indicate the effects of detector parameters and satellite drift on image reconstruction. The errors in drift correction were incorporated. (A) Rejection threshold of 40 DU; (B) rejection threshold of 500 DU. The X and Y axes are in units of CMOS pixels. The central region of one CMOS pixel containing the source has been masked. The bar in the top-left corner corresponds to 3'' on the sky. (Threshold on central-pixel energy = 150 DU; threshold on total event energy = 450 DU.)

It is found that the presence of double or multiple photon events in UVIT data frames could lead to photometric distortion/error in the reconstructed images. Further, it is shown that a point source with a high count rate would give rise to complex structures in a uniform background in its neighborhood. It is also anticipated that in the case of extended sources these structures would be even more complex; in particular, the neighborhood of hot spots in the source would be affected in a very adverse way. This is discussed in § 5.

5. PHOTOMETRIC EFFECTS ON SIMULATED EXTENDED SOURCES

All of the effects discussed above give a detailed picture of the characteristics of reconstructed images. However, actual astronomical sources (e.g., galaxies) are known to have more complex structures, with regions of varying intensity, background, etc. Hence the photometric distortion present in these cases would also be expected to be complex in nature. To explore the effects of the various factors discussed above, an ultraviolet image (FUV band) of the galaxy M51 from the GALEX data archive1 is used for the simulations. Necessary corrections are applied to the photon count rates of this archival image such that the same galaxy is effectively observed by UVIT at a distance three times greater than the actual one. This is done to match the scale of the input image with the UVIT scales, so that 1 pixel of the input image corresponds to 0.5'' of sky. A simulated sky image is generated from this input image using Poisson statistics (as described in § 3), to be further processed within the simulated UVIT subsystems. A comparison of the input image and the simulated image is shown in Figures 8A and 8B. An integration time of 3000 s is used, with a frame acquisition rate of 30 frames s-1. Both images are convolved with a Gaussian function with σ of 0.5'' for smoothing. The reconstructed images are shown in Figures 8C and 8D. The effect of the modulation pattern (discussed in § 3.3) is visible in Figure 8C, the uncorrected reconstructed image from the 3-square algorithm; the superimposed grid structure has a frequency of one CMOS pixel. The corresponding corrected image is shown in Figure 8D.
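
The Poisson step itself is straightforward; a minimal sketch (assuming the input image is expressed in counts s-1 per pixel, with a hypothetical uniform rate for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_sky(rate_image, t_int=3000.0):
    """Draw a noisy sky realization: the expected counts per pixel over
    the integration time set the mean of independent Poisson draws.
    rate_image is in counts s^-1 per pixel; returns counts."""
    return rng.poisson(rate_image * t_int)

rates = np.full((8, 8), 0.05)  # hypothetical uniform 0.05 counts/s/pixel
sky = simulate_sky(rates)      # mean ~ 150 counts per pixel over 3000 s
```

The resulting counts image is what the simulated UVIT subsystems then process frame by frame.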

Fig. 8.— The input and simulated images of a galaxy. An archival ultraviolet image of the galaxy M51 from the GALEX data archive is used for the simulation. (A) Input image, obtained after applying necessary corrections to the archival image; (B) simulated image generated from the input image using Poisson statistics; this image is then processed within the UVIT subsystems. Both images correspond to a 3000 s integration time with a frame rate of 30 frames s-1 and are convolved with a Gaussian function with σ of 0.5''. (C) Uncorrected reconstructed image of the same galaxy using the 3-square algorithm; the superimposed grid pattern is the modulation pattern discussed in § 3.3, with a frequency of one CMOS pixel. (D) The corresponding corrected image. The pixel scale for all images is 0.5'' pixel-1.

To estimate the photometric accuracy and distortion in the reconstructed image, a section of the galaxy's spiral arm (from the top-left corner) is chosen (Fig. 9). The images of this region are reconstructed with the 3-square algorithm for rejection thresholds of 40 DU and 500 DU, with a pixel scale of 1/8 arcsec. Further, a true image of the same region is also constructed from the known positions of all the simulated photons on the detector. For smoothing purposes, the true and reconstructed images are convolved with a Gaussian function having σ of 0.5''. The ratio of the reconstructed images to this true image is calculated to quantify the photometric accuracy.

Fig. 9.— Images showing the variation of photometric errors in the reconstructed images with rejection threshold. Upper panels: rejection threshold of 40 DU; lower panels: rejection threshold of 500 DU. Left (A, D): images constructed using the "true" positions of the photons on the UVIT detectors; center (B, E): reconstructed images using the 3-square algorithm; right (C, F): ratios of the reconstructed images to the "true" images. All images are convolved with a Gaussian function with σ of 0.5''. The reconstructed image with the lower rejection threshold of 40 DU shows large photometric errors. With the higher rejection threshold (500 DU), most of the double/multiple photon events are selected for image reconstruction, reducing the photometric error. It is interesting to note that the locations of minima in the ratio images are away from the positions of the sources, indicating a complex structure of photometric errors.

The effect of a strong source on the surrounding background (as discussed in § 4) is clearly visible here in the image reconstructed with a rejection threshold of 40 DU (Figs. 9B and 9C). In this case, the regions away from the source show good recovery of photon events (as the ratio is ∼1), leading to accurate photometry. However, the nearby regions around the source suffer from loss of photon events, and these are the regions where the photometry is most corrupted. For the section of the galaxy chosen, this loss is as high as ∼40%. It is to be noted that the regions of inaccurate photometry are not symmetric around the source, indicating that the intensity profile of the source could generate photometric distortion of a complicated shape in its surroundings. The source itself also suffers from photometric saturation effects, and only ∼80%–90% of the source photons are recovered. However, with a rejection threshold of 500 DU, such variations are minimized and the photometric recovery is better than 85% in and around the source (Figs. 9E and 9F). The large fluctuations in the ratio images away from the sources are due to small-number statistics, as the sky background is extremely low (∼10-5 photons s-1 per arcsec2). Though both the true and reconstructed images contain roughly the same number of photons in the background away from the source, a small error in a photon's position would cause large fluctuations in the ratio images (Figs. 9C and 9F).

It is to be noted that although the input GALEX image has a count rate of only ∼0.05 counts s-1 per arcsec2 (i.e., 0.015 counts per frame per CMOS pixel), the reconstructed images still show significant photometric errors. The reason for this lies in the shape of the centroid algorithms: during the observations, it is the count rate within the centroid algorithm's shape that determines the fraction of double or multiple events and hence the amount of photometric error in the reconstructed images. As an example, for the 3-square algorithm, the sky area within the algorithm shape is 9 CMOS pixels, leading to a count rate of ∼0.13 counts per frame. Further, as part of the photon-event footprint lies outside the 3-square window, for low values of the rejection threshold the photometry at any point would be affected by the presence of photon events over an even larger area. This is evident from the reconstructed images shown in Figure 9. A number of star-forming galaxies with strong and extended H II regions would be expected to show such photometric errors.
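
The arithmetic behind the 3-square estimate above can be written out explicitly, assuming a CMOS pixel subtends roughly 3'' × 3'' (≈9 arcsec2) on the sky, an assumption consistent with the per-pixel rate quoted in the text:

```python
# Sky rate from the GALEX input image and the UVIT frame rate.
sky_rate = 0.05      # counts s^-1 arcsec^-2
frame_rate = 30.0    # frames s^-1
pixel_area = 9.0     # arcsec^2 per CMOS pixel (assumed ~3'' x 3'')
window_pixels = 9    # the 3-square window covers 3 x 3 CMOS pixels

per_pixel_per_frame = sky_rate * pixel_area / frame_rate    # ~0.015
per_window_per_frame = per_pixel_per_frame * window_pixels  # ~0.135
```

The per-window rate of ∼0.135 counts per frame, nearly an order of magnitude above the per-pixel rate, is what drives the double-event fraction.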

6. ANGULAR RESOLUTION OF THE SIMULATED IMAGES

The angular resolution of the UVIT images is estimated by simulating point sources as described in § 4.2, including the effects of optics and satellite drift. It is found that the angular resolution of the reconstructed images is heavily dominated by the PSF of the optics and detector; other factors, such as errors in the satellite-drift correction and errors in centroid estimation, have only a small effect. A 2D Gaussian function fit to the PSF profile yields σ of ∼0.7'' on either axis. The PSF is found to be independent of the choice of centroid algorithm and rejection threshold. The σ of the output images does not differ much from the input value of 0.7'', which is consistent with the errors of centroiding (§ 3.3) and the errors in the satellite-drift correction (§ 2). However, double/multiple photon events could change the profile of the simulated PSF if the source has a high photon count rate per frame: it is found that an average photon count rate of 2 counts per frame could reduce the σ of the PSF to below 0.5''.
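
The σ estimate can be reproduced with a standard least-squares fit. The sketch below fits a symmetric 2D Gaussian to a synthetic PSF image, generated here with the paper's σ = 0.7'' purely to demonstrate the fitting step; in practice the PSF image would come from the optics and detector simulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sigma):
    """Symmetric 2D Gaussian evaluated on flattened coordinate arrays."""
    x, y = xy
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

# Sample grid in arcsec, +/-3'' at 0.1'' spacing.
x, y = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
psf = gauss2d((x, y), 1.0, 0.0, 0.0, 0.7)  # synthetic stand-in PSF

p0 = (1.0, 0.0, 0.0, 0.5)  # initial guess for (amp, x0, y0, sigma)
popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), psf.ravel(), p0=p0)
sigma_fit = popt[3]  # recovers ~0.7 arcsec for this synthetic input
```

The same fit applied to the output (reconstructed) images is how the stability of σ against algorithm and threshold choices can be checked.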

To see the effect on extended sources, a Hubble ACS B-band image2 of the same galaxy (as used in § 5) is simulated with a pixel scale of 0.05'' per pixel. The input and output reconstructed images of a portion of the galaxy's spiral arm are shown in Figure 10 for an integration time of 6000 s. It is evident that sources separated by ∼3.0'' can easily be resolved in the reconstructed image. The convolution of the input image with the simulated PSF of UVIT is also shown for comparison. The reconstructed and convolved images look very similar to each other, confirming the effect of the PSF on the reconstructed images.

Fig. 10.— Input and simulated images of a portion of the galaxy's spiral arm, showing the effect of the UVIT PSF on the image. (A) Input Hubble ACS image in the B band; this image is convolved with the PSF of UVIT to give (B), the resultant image. (C) The reconstructed image using the 3-square algorithm with a rejection threshold of 500 DU. The pixel scale is 0.05'' pixel-1 and the integration time is 6000 s. All the images are convolved with a Gaussian function having σ of 0.2'' for smoothing purposes.

7. SUMMARY

The results presented in this work are based on numerical simulations performed to evaluate the expected performance of the Ultra Violet Imaging Telescope (UVIT). It is shown that the imaging characteristics of UVIT would be affected by a number of factors. Apart from the performance of the optics, the other important factors are the drift of the satellite and the functioning of the photon-counting detectors. The final image is reconstructed from the UVIT data frames using centroid algorithms; further, the satellite drift has to be removed from the data frames before they are used for image reconstruction. Observations of sky fields containing point sources are simulated through the visible channel of UVIT, and the centroids of the sources in the field are used to track the satellite drift. The results from these simulations indicate that the drift of the satellite could be recovered with an accuracy of < 0.15''.
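
The drift-tracking idea can be illustrated with a minimal sketch (a hypothetical helper, not the actual ISRO pipeline): the measured centroids of field stars in each visible-channel frame are compared with a reference frame, and the mean offset gives that frame's drift.

```python
import numpy as np

def estimate_drift(ref_centroids, frame_centroids):
    """Mean offset of field-star centroids relative to a reference frame.
    Both arguments are (N, 2) arrays of star centroids in arcsec;
    returns the (dx, dy) drift for the frame."""
    ref = np.asarray(ref_centroids, dtype=float)
    cur = np.asarray(frame_centroids, dtype=float)
    return (cur - ref).mean(axis=0)
```

Averaging over several stars suppresses the individual centroiding errors, which is what allows the drift to be recovered to a fraction of the single-star centroid accuracy.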

UVIT data frames are generated including the effects of satellite drift, optics, and the photon-counting detectors. The centroids of photon events in the data frames are determined by three different algorithms (namely 5-square, 3-square, and 3-cross) with different pixel shapes, using the center-of-gravity method. Fluctuations in the background of the dark frames and in the photon-event footprints could result in errors in the centroid estimates; these errors are estimated to be ∼0.01 CMOS pixel, or ∼0.03''. A modulation pattern has been seen in the centroid determination process with the 3-square and 3-cross algorithms; it has been corrected before reconstructing the image.
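
As an illustration of the center-of-gravity step, the sketch below computes the sub-pixel centroid over a 3 × 3 window (the 3-square case); this is a simplified stand-in for the flight algorithms, which also apply the energy thresholds discussed earlier.

```python
import numpy as np

def centroid_3square(window):
    """Center-of-gravity centroid of a 3x3 window of pixel energies
    (dark-subtracted); returns the (x, y) offset, in pixels, of the
    event from the central pixel."""
    w = np.asarray(window, dtype=float)
    total = w.sum()
    offsets = np.array([-1.0, 0.0, 1.0])
    cx = (w.sum(axis=0) * offsets).sum() / total  # column-weighted offset
    cy = (w.sum(axis=1) * offsets).sum() / total  # row-weighted offset
    return cx, cy
```

For the 3-cross case the corner pixels of the window would simply be excluded from the sums, which is why corner events cost that algorithm more energy.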

The characteristics of reconstructed images from UVIT observations and the effects of the various thresholds on image reconstruction are studied by simulating archival images of a galaxy from the GALEX and HST ACS data archives. As galaxies (and extended astronomical sources in general) have complex structures, several artificial point sources are also simulated to explore these effects. Results from these simulations show that the photometry of the reconstructed images would depend on the choice of the two energy thresholds used by the centroid algorithms: the values of the energy thresholds could lead to photometric bias over the face of the CMOS pixel. By choosing suitable energy thresholds, one could reduce such variations to < 1% over the pixel face. Further, the photometric accuracy of the reconstructed images would be affected by the appearance of double or multiple photon events, which occur with a significant probability in and around bright sources. It turns out that the presence of a strong source could also produce complicated photometric patterns in its surroundings; these patterns would depend on the choice of the rejection threshold used to reconstruct the image. The photometric distortion in the reconstructed images of the galaxy is complex in nature and depends on the intensity profile of the bright sources present in the galaxy. The regions away from such bright sources offer nearly accurate photometry, while the regions near such sources suffer from higher photometric inaccuracies. The results of the simulations show that the photometric accuracy in these regions can be as low as ∼60% with the strict rejection threshold (40 DU). Also, the strong sources in the galaxy themselves suffer from photometric saturation effects, with only ∼80%–90% of the total photon events recovered. However, by using a higher rejection threshold of 500 DU (and thus including most of the double/multiple events in the image reconstruction), photometric recovery better than 85% is obtained.

The PSF of the reconstructed images follows a two-dimensional Gaussian profile with σ of 0.7'' on either axis and is dominated by the performance of the optics and detector. Further, the PSF is found to be the same for the different centroid algorithms and rejection thresholds. However, the presence of double/multiple photon events could cause the PSF profile to vary in the case of bright sources. The simulations based on an HST ACS image show that sources separated by 3.0'' are clearly resolved in the reconstructed images.

It is of interest to note that the results of this study are not limited to the performance of UVIT, but are applicable in general to any CCD- or CMOS-readout–based photon-counting detector that works at a fixed frame-readout rate.

Based on observations made with the NASA Galaxy Evolution Explorer. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. Some of the data presented in this article were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584 and by other grants and contracts. We are thankful to Joe Postma (University of Calgary, Canada) for providing us with the necessary software tools for centroid determination and the experimental data of the UVIT detectors. We also thank Shri Harish of the ISRO Satellite Centre (ISAC), Bangalore, for the simulation data of the satellite drift. MKS thanks the Council for Scientific and Industrial Research (CSIR), India, for the research grant award F.NO.9/545(25)/2005-EMR-I.

Online Material

  • Figure 7
  • Figure 9
  • Figure 10

Footnotes

  • See http://galex.stsci.edu/GR4.

  • Taken from the HST data archive: http://archive.stsci.edu/hst.
