
THE TAOS PROJECT: RESULTS FROM SEVEN YEARS OF SURVEY DATA


Published 2013 June 14 © 2013. The American Astronomical Society. All rights reserved.
Citation: Z.-W. Zhang et al. 2013, AJ, 146, 14. DOI: 10.1088/0004-6256/146/1/14


ABSTRACT

The Taiwanese–American Occultation Survey (TAOS) aims to detect serendipitous occultations of stars by small (∼1 km diameter) objects in the Kuiper Belt and beyond. Such events are very rare (<10^−3 events per star per year) and short in duration (∼200 ms), so many stars must be monitored at a high readout cadence. TAOS typically monitors ∼500 stars simultaneously at a 5 Hz readout cadence with four telescopes located at Lulin Observatory in central Taiwan. In this paper, we report the results of the search for small Kuiper Belt objects (KBOs) in seven years of data. No occultation events were found, resulting in a 95% c.l. upper limit on the slope of the faint end of the KBO size distribution of q = 3.34–3.82, depending on the surface density at the break in the size distribution at a diameter of about 90 km.


1. INTRODUCTION

The size distribution of Kuiper Belt objects (KBOs) has been accurately measured for diameters D > 30 km (Fuentes et al. 2009; Fuentes & Holman 2008; Fraser & Kavelaars 2008; Fraser et al. 2008; Bernstein et al. 2004; Luu & Jewitt 2002), while Fraser & Kavelaars (2009) have provided measurements down to diameters D ≳ 15 km. The size distribution of large KBOs approximately follows a power law of the form:

$\frac{dn}{dD} \propto D^{-q} \qquad (1)$

where the slope q = 4.5 down to a diameter of D ≈ 90 km. A clear break in the size distribution has been detected near 90 km (Fuentes et al. 2009; Fraser & Kavelaars 2009; Bernstein et al. 2004), giving way to a shallower distribution for smaller objects. Various models of the formation of the Kuiper Belt have been developed (Schlichting et al. 2013; Benavidez & Campo Bagatin 2009; Kenyon & Bromley 2001, 2004, 2009; Pan & Sari 2005; Benz & Asphaug 1999; Kenyon & Luu 1999a, 1999b; Davis & Farinella 1997; Stern 1996; Duncan et al. 1995) which predict this break in the size distribution, but these models make vastly different predictions on the size distribution for objects smaller than the break diameter. These models are based on different scenarios of the dynamical evolution of the solar system and the internal structure of the KBOs themselves. It is generally assumed that the outward migration of Neptune stirred up the orbits of objects in the Kuiper Belt, which subsequently underwent a process of collisional erosion, giving rise to a shallower size distribution. The simplest models also predict a small-end power-law size distribution as shown in Equation (1), where the slope q depends primarily on the internal strength of the KBOs. Solid objects held together by material strength give rise to larger slopes, while the weaker gravitationally bound rubble piles are more easily broken up and give rise to smaller values of q. A measurement of the size distribution of smaller objects (down to D ≳ 100 m) is needed in order to constrain these models. Furthermore, such a measurement would provide important information on the origin of short-period comets (Volk & Malhotra 2008; Tancredi et al. 2006; Duncan & Levison 1997; Levison & Duncan 1997; Morbidelli 1997; Holman & Wisdom 1993).

The direct detection of such objects is difficult because they are extremely faint, with typical magnitudes R > 28, and are thus invisible to surveys using even the largest telescopes. However, a small KBO will induce a detectable drop in the brightness of a distant star when it passes across the line of sight to the star (Chang et al. 2006, 2007, 2013; Schlichting et al. 2009, 2012; Wang et al. 2009, 2010; Bianco et al. 2009, 2010; Bickerton et al. 2008, 2009; Liu et al. 2008; Zhang et al. 2008; Nihei et al. 2007; Roques et al. 1987, 2003, 2006; Cooray 2003; Cooray & Farmer 2003; Roques & Moncuquet 2000; Brown & Webster 1997; Bailey 1976). High-speed photometry is needed to detect these events, given that we expect a typical event duration of about 200 ms. Detection of such events is further complicated by the fact that the sizes of the objects of interest are on the order of the Fresnel scale (about 1.4 km for our median observed wavelength of about 600 nm and a typical object distance of 43 AU), and the resulting light curves exhibit significant diffraction features (Nihei et al. 2007). The goal of the Taiwanese–American Occultation Survey (TAOS) project is to detect such occultation events and measure the size distribution of KBOs with diameters 0.5 km < D < 30 km.
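As a rough check of the numbers quoted above, the sketch below computes the Fresnel scale using the usual definition F = (λΔ/2)^{1/2} and an approximate event duration. The ∼30 km s^−1 relative velocity and the diffraction-broadened shadow width used here are illustrative assumptions, not values taken from the survey pipeline.

```python
import math

# Fresnel scale F = sqrt(lambda * Delta / 2), using the values quoted in
# the text (median wavelength ~600 nm, object distance 43 AU).
AU = 1.496e11                       # m
wavelength = 600e-9                 # m
delta = 43 * AU                     # m
F = math.sqrt(wavelength * delta / 2.0)
print(f"Fresnel scale ~ {F / 1e3:.2f} km")          # ~1.4 km

# Rough event duration for a ~1 km object: a shadow a few Fresnel scales
# wide, crossed at an assumed relative velocity of ~30 km/s near opposition.
shadow_width = 2.0 * math.sqrt(3.0) * F + 1e3       # m (illustrative)
v_rel = 30e3                                        # m/s (assumed)
print(f"event duration ~ {shadow_width / v_rel * 1e3:.0f} ms")   # ~200 ms
```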

To date, 18 events consistent with occultations by objects in the outer solar system have been detected. The three detections claimed by Roques et al. (2006), if they are occultations, are unlikely to have been caused by KBOs (the inferred distances are inconsistent with the Kuiper Belt). On the other hand, two detections reported by Schlichting et al. (2009, 2012) are consistent with occultation events by 1 km diameter KBOs at 45 AU. Schlichting et al. (2012) found one additional event consistent with a KBO occultation, but they rejected this event because of its high inclination (it was detected at an ecliptic latitude of b = 81.5°). The remaining 12 possible events (Chang et al. 2007) were found in archival RXTE measurements of Sco X-1. However, the interpretation of these detections as occultation events has been challenged (Jones et al. 2008; Blocker et al. 2009).

TAOS has been in operation since 2005 February. The system is described in detail in Lehner et al. (2009). In summary, the survey operates four 50 cm telescopes at Lulin Observatory in central Taiwan. The telescopes are very fast (F/1.9), with a field of view of 3 deg^2. Each telescope is equipped with a camera which utilizes a single 2k × 2k CCD imager. All four telescopes monitor the same stars simultaneously at a readout cadence of 5 Hz (with the readout cadence synchronized among the four telescopes). We require that any events be detected simultaneously in each of the telescopes in order to minimize the false positive rate (Lehner et al. 2010).

High-speed cadence is achieved using a special CCD readout mode called zipper mode, which is described in detail in Lehner et al. (2009). To summarize, instead of taking repeated exposures, we leave the shutter open for the duration of a data run. At a cadence of 5 Hz, we read out a subimage comprising 76 rows, which we call a rowblock. When a rowblock is read out, the remaining photoelectrons on the CCD are shifted down by 76 rows as well. Consider a rowblock starting at the far edge of the CCD: photoelectrons are collected there and then shifted into the position of the next rowblock, where additional photoelectrons are collected on top of them before the next shift. By the time the original photoelectrons are read out, they have been shifted beneath every star on the focal plane, so a single rowblock contains flux from every star in the field, although collected at different epochs.

While zipper-mode readout allows us to image every star in the focal plane, two characteristics of the data reduce the signal-to-noise ratio (S/N). The first is that sky background accumulates in a rowblock during its entire march across the focal plane, while photoelectrons from a single star are only collected at one location. Furthermore, it takes about 1.3 ms to read out a single row, corresponding to about 95 ms for an entire rowblock, so the actual exposure of a star is only 105 ms, while sky background is collected for a total of 5.4 s. Second, flux is also collected from the stars during the readout stage, and the brighter stars leave significant numbers of photoelectrons in the pixels as they are being shifted. This flux appears as bright vertical streaks in the images (see the upper left panel of Figure 1). Nevertheless, zipper mode is successful in collecting high S/N photometric data on enough stars for the project to probe a significant fraction of the allowed size distribution parameter space.
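The timing arithmetic in this paragraph can be reproduced with a short sketch. The 2048-row CCD format and a 1.25 ms per-row readout (the text quotes "about 1.3 ms") are assumptions chosen to match the quoted totals.

```python
# Zipper-mode timing sketch, using the numbers quoted in the text.
rows_per_block = 76
row_read_time = 1.25e-3      # s per row (the text quotes "about 1.3 ms")
cadence = 0.2                # s between rowblock reads (5 Hz)
ccd_rows = 2048              # assumed 2k x 2k imager

readout_time = rows_per_block * row_read_time         # ~0.095 s per rowblock
star_exposure = cadence - readout_time                 # ~0.105 s of on-sky exposure per epoch

# Sky background accumulates during the rowblock's entire march across the CCD.
shifts_across_ccd = round(ccd_rows / rows_per_block)   # ~27 shifts
sky_integration = shifts_across_ccd * cadence          # ~5.4 s

print(star_exposure, sky_integration)
```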


Figure 1. Top left: horizontal subsections of two consecutive rowblocks with a bright moving object in the center. (The boundary between the rowblocks is visible as a dark horizontal line bisecting the image. This is a slight excess in dark current from the edge of the CCD.) The vertical streaks (see the text) from the brighter stars are also evident. Top right: the same rowblock subsections after the background streak subtraction. While the streaks from the bright stars are removed, the moving object itself is not completely removed, and the background flux near the moving object is oversubtracted. Bottom left: gray-scale image representation of the background flux subtracted from the rowblocks near those shown in the top panels. Here, each row represents the flux subtracted from a single rowblock, while the columns correspond to those in the top panels. The vertical streaks in this image are the streaks left by the stars during the zipper-mode readout. The moving object in the original data causes the background to be oversubtracted, and the oversubtracted flux clearly shows the trajectory of the moving object across the focal plane. Bottom right: the same background image from the bottom-left panel after applying the high-pass mean and variance filters to remove the streaks from the bright stars. The oversubtracted background flux from the moving object is clearly evident.


In this paper, we report on the analysis of seven years of zipper-mode data collected by the TAOS system. In Section 2, we describe the data set used in this analysis. In Section 3, we provide a detailed discussion of the analysis pipeline. In Section 4, we present the results of the analysis of the data set, and finally in Section 5, we discuss our future plans for the survey.

Throughout the remainder of this paper, we use the following definitions. A data run is a series of high-cadence multi-telescope measurements on a single field, typically for 1.5 hr at a time. A light curve is a time series of photometric measurements of a single star from one telescope in one data run, and a light curve set is a set of multi-telescope photometric measurements of a single star during one data run. A flux tuple is a set of multi-telescope measurements (with a minimum of three telescopes) of one star at a single epoch. A light curve set is thus a time series of flux tuples.
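These definitions map naturally onto simple container types. The sketch below is an illustrative Python rendering of the nomenclature, not code from the TAOS pipeline; all class and field names are invented for clarity.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class LightCurve:
    """Photometric time series of one star, from one telescope, in one data run."""
    star_id: int
    telescope: int
    flux: np.ndarray            # shape (n_epochs,)


@dataclass
class LightCurveSet:
    """Multi-telescope light curves of one star during one data run.

    A flux tuple is the set of simultaneous measurements (from at least
    three telescopes) of the star at a single epoch, so a light curve set
    can be read as a time series of flux tuples.
    """
    star_id: int
    light_curves: List[LightCurve]

    def flux_tuple(self, epoch: int) -> tuple:
        return tuple(lc.flux[epoch] for lc in self.light_curves)
```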

2. THE SEVEN YEAR DATA SET

Earlier results from the TAOS search for small KBOs were reported in Zhang et al. (2008) and Bianco et al. (2010). These two papers describe results obtained from operating three telescopes. The data set used in Bianco et al. (2010) ended on 2008 August 2, when the fourth telescope became operational. The addition of the fourth telescope to the array was beneficial in two ways. First, using four telescopes increases the significance of any candidate occultation events, while the false positive rate remains constant (Lehner et al. 2010). Second, we have experienced frequent problems with the reliability of our cameras and telescopes. With four telescopes, we can still operate with three telescopes when one of the systems is shut down for repair. Given that a minimum of three telescopes is necessary to keep the false positive rate low enough (Lehner et al. 2010), we can still collect useful data in the event that there is a problem with one of the systems. The overall operational efficiency of the survey thus improved significantly since we could still collect data when one telescope was offline.

The current data set is summarized in Table 1, along with the data sets used in Zhang et al. (2008) and Bianco et al. (2010). The current data set is more than 2.3 times larger than that used in Bianco et al. (2010). Ninety percent of the photometric data in this set were acquired within 6° of the ecliptic in order to maximize the event rate. The results quoted in this paper are thus applicable to both the hot (inclination i > 5°) and cold (i < 5°) KBO populations.

Table 1. Data Set Parameters

Parameter Zhang et al. (2008) Bianco et al. (2010) This Work
Start date 2005 Feb 7 2005 Feb 7 2005 Feb 7
End date 2006 Dec 31 2008 Aug 2 2011 Sep 8
Data runs 156 414 858
Light curve sets^a 110,554 366,083 835,732 (209,130)
Total exposure (star hours)^a 152,787 500,339 1,159,651 (292,514)
Photometric measurements^a 7.8 × 10^9 2.7 × 10^10 6.7 × 10^10 (2.1 × 10^10)
Flux tuples^a 2.6 × 10^9 9.0 × 10^9 2.1 × 10^10 (5.3 × 10^9)

Note. ^a Values for four-telescope data shown in parentheses.


3. DATA ANALYSIS

The data analysis took place in three stages: photometric reduction of raw image data, the search for occultation events, and the detection efficiency simulation. These stages roughly follow the same procedures outlined in Bianco et al. (2010), but several improvements were made to the pipeline in this round of data analysis. The new pipeline is described in the following subsections.

3.1. Photometric Reduction

For the first step in the pipeline, a custom photometric reduction pipeline (Zhang et al. 2009) is used to measure the brightness of each star in the field at each epoch, and the resulting measurements are assembled into a time-series light curve for each star. For every star, the light curve from each telescope is combined into a light curve set. Next, bad data points (e.g., measurements where the star lies near the edge of the focal plane) are removed from the light curves (see Zhang et al. 2009, for details).

A significant improvement added to this stage in the process is the automated flagging of photometric measurements affected by bright objects (such as satellites or airplanes) moving across the field of view. As discussed in Bianco et al. (2010), several candidate events were found in the previous data set. Visual inspection of the relevant images revealed that these events were caused by bright moving objects passing near the star that appeared to be occulted. The new algorithm to detect these bright moving objects is described in Section 3.1.1.

3.1.1. Bright Moving Object Detection

As mentioned in Section 3.1, bright objects moving across the field of view during zipper-mode image collection can give rise to a large number of false positive events. This is caused by the background subtraction in the photometric reduction. As discussed in Section 1, the zipper-mode readout has the disadvantage that the brighter stars leave streaks along the columns in the images due to the finite time it takes to read out a row from the CCD. These streaks are removed during the photometric reduction by subtracting an estimate of the mode of the column from each pixel in the column, where the mode is calculated for each zipper-mode rowblock (Zhang et al. 2009). A bright moving object passing over a particular column causes the mode of that column to be overestimated, and the background is thus oversubtracted, causing an artificial drop in the measured brightness of any star whose aperture includes that column (see the top panels of Figure 1).

In order to detect these events, we save the 3σ-clipped column mean for each rowblock and column on all four cameras. These background data are stored in FITS format, and an example is shown in the bottom left panel of Figure 1. Note that each column in this image corresponds to an actual column on the CCD, but the row corresponds to a rowblock in the zipper-mode data. The pixel value is simply the column mean (after 3σ clipping) for that rowblock and column. The streaks from the bright stars are evident in the image. Also clearly visible is the object shown in the top panels moving across the image. We note that what appears as a moving object is actually the high value for the column average due to the moving object. Nevertheless, it is straightforward to identify which columns in which rowblocks are affected by the overestimated background.
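The background image described here can be built up as in the sketch below, which computes a 3σ-clipped column mean for one rowblock. The simple iterative clip is an assumption; the production pipeline (Zhang et al. 2009) may implement it differently.

```python
import numpy as np


def clipped_column_mean(rowblock, nsigma=3.0, iterations=3):
    """3-sigma-clipped mean of each CCD column within one rowblock.

    `rowblock` has shape (76, n_columns).  A simple iterative clip is
    assumed here as an illustration of the quantity saved by the pipeline.
    """
    data = rowblock.astype(float)
    mask = np.ones_like(data, dtype=bool)
    for _ in range(iterations):
        kept = np.where(mask, data, np.nan)
        mu = np.nanmean(kept, axis=0)
        sd = np.nanstd(kept, axis=0)
        mask = np.abs(data - mu) < nsigma * sd + 1e-12
    return np.nanmean(np.where(mask, data, np.nan), axis=0)


# Stacking these vectors over all rowblocks in a data run yields the
# background image shown in the bottom-left panel of Figure 1
# (rows = rowblocks, columns = CCD columns).
```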

First, we normalize the image by applying to each column the same rolling clipped mean and variance filter that we use to remove the slowly varying trends along the light curves (Lehner et al. 2010). That is, we replace each column average value b_{mn}, where m is the rowblock index and n is the column index, with a normalized value d_{mn}, defined as

$d_{mn} = \frac{b_{mn} - \mu_{mn}}{\sigma_{mn}} \qquad (2)$

where μ_{mn} is the local (3σ-clipped) mean within column n and σ_{mn} is the local variance within column n, calculated with window sizes of 21 rowblocks. Note that σ_{mn} is calculated after the mean has been subtracted from every rowblock background value along the column.

The bottom right panel of Figure 1 shows the resulting values of d_{mn} after application of this filter to the image shown in the bottom left panel. We then apply an algorithm (Lutz 1980)^17 to find objects in the normalized background image. We define an object as either a set of 20 or more adjoining pixels with values of d_{mn} larger than 1.5, or as a set of 5 or more pixels with values larger than 2.5. We then flag any photometric measurements where a column affected by a moving object lies within the aperture used in the photometry. The removal of all flagged data points results in the elimination of all false positive events caused by moving objects.
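A minimal sketch of this object-finding step is shown below. It uses scipy.ndimage.label for the connected-component search in place of the Lutz (1980) algorithm, with the two thresholds quoted in the text; the function name and interface are illustrative.

```python
import numpy as np
from scipy import ndimage


def flag_moving_objects(d):
    """Flag (rowblock, column) pixels affected by bright moving objects.

    `d` is the normalized background image d_mn (rowblocks x columns).
    An "object" is >= 20 connected pixels with d > 1.5, or >= 5 connected
    pixels with d > 2.5, following the thresholds given in the text.
    scipy.ndimage.label stands in for the Lutz (1980) algorithm.
    """
    flagged = np.zeros(d.shape, dtype=bool)
    for threshold, min_pixels in [(1.5, 20), (2.5, 5)]:
        labels, n_objects = ndimage.label(d > threshold)
        for i in range(1, n_objects + 1):
            component = labels == i
            if component.sum() >= min_pixels:
                flagged |= component
    return flagged   # photometry whose aperture touches a flagged column is discarded
```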

3.2. Event Search

3.2.1. Rank Product Statistic

Our event selection algorithm uses the rank product statistic, which is described in detail in Lehner et al. (2010). This statistic takes advantage of the multi-telescope data to give an exact value for the significance of any candidate event, even if the underlying distribution of the flux measurements is not known. Given that TAOS samples at a 5 Hz readout cadence, we cannot resolve the features of any candidate events in our light curves. Any occultation event we detect would typically appear as a small drop in the flux for one or two consecutive measurements (although occultation events by larger objects could contain up to five or six consecutively measured flux drops). We therefore need to have an accurate estimate of our false positive rate in order to report a credible result.

To search for events in the data, we calculate the rank product statistic on each tuple in the entire data set. To calculate this statistic, we take a time series of fluxes $f_1, f_2, \ldots, f_{N_\mathrm{P}}$ (where $N_\mathrm{P}$ is the number of points in the time series), and replace each measurement $f_j$ with its rank $r_j$. The lowest measurement in the time series will be given a rank of 1, and the brightest measurement will be given a rank of $N_\mathrm{P}$. For a given light curve set, we thus replace the time series of flux tuples with a time series of rank tuples of the form
$\left(r_j^1, r_j^2, \ldots, r_j^T\right),$

where T is the number of telescopes used. We do not know the underlying distribution of the flux tuples in each light curve set, but, assuming the light curves meet the correct requirements (discussed in Section 3.2.2), we do know the exact distribution of rank tuples, and we can thus calculate the significance of any candidate event exactly.

The rank product statistic is calculated as

$z_j = -\ln \prod_{i=1}^{T} \frac{r_j^i}{N_\mathrm{P}} \qquad (3)$

If one assumes that the rank distribution for a single light curve is continuous, then this statistic follows the gamma distribution. Given that the ranks are discrete, the approximation fails for large values of z; however, the probability distribution can be calculated exactly using simple combinatorics.
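The sketch below computes this statistic for a set of simultaneous light curves and evaluates the tail probability under the gamma approximation. It is only an illustration of Equation (3) as reconstructed above; the exact discrete calculation used in the paper is not reproduced here.

```python
import numpy as np
from scipy import stats


def rank_product_statistic(fluxes):
    """z_j = -ln prod_i (r_j^i / N_P), following Equation (3) as reconstructed above.

    `fluxes` has shape (n_telescopes, n_epochs); ranks run from 1 (faintest)
    to N_P (brightest), independently for each telescope.
    """
    n_tel, n_p = fluxes.shape
    ranks = np.argsort(np.argsort(fluxes, axis=1), axis=1) + 1
    return -np.log(ranks / n_p).sum(axis=0)


# Under the gamma approximation, each term -ln(r/N_P) is roughly Exponential(1)
# for uniform ranks, so z is approximately Gamma(T, 1) distributed; the exact
# discrete distribution is needed far out in the tail, as noted in the text.
rng = np.random.default_rng(0)
fluxes = rng.normal(1000.0, 30.0, size=(4, 27000))   # simulated 1.5 hr run at 5 Hz
z = rank_product_statistic(fluxes)
print(z.max(), stats.gamma.sf(z.max(), a=fluxes.shape[0]))
```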

In the case of an occultation event, one expects the flux to drop below the nominal value simultaneously in each of the telescopes. The corresponding rank tuple would have lower values for all of the ranks, and the resulting rank product would be significantly smaller than average, leading to a large value of our test statistic z. We set a threshold z_t for event detection based on choosing an upper limit on the expected false positive rate, such that

$N_\mathrm{T}\, P(Z > z_\mathrm{t}) = 0.25 \qquad (4)$

where Z is a random variable chosen from the rank product probability distribution, 0.25 is our limit on the expected number of false positive events in the data set, and N_T is the total number of tuples used in the event search.
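Under the gamma approximation discussed above, the threshold of Equation (4) can be evaluated as in the sketch below. scipy's gamma inverse survival function stands in for the exact discrete rank-product distribution used in the actual analysis, and the numbers are only illustrative.

```python
from scipy import stats

# Solve N_T * P(Z > z_t) = 0.25 for z_t under the gamma approximation.
T = 4                  # telescopes contributing to each tuple
N_T = 2.1e10           # total number of flux tuples searched (order of Table 1)
z_t = stats.gamma.isf(0.25 / N_T, a=T)
print(f"z_t ~ {z_t:.1f}")
```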

3.2.2. Light Curve Filtering and Data Quality Checks

The rank product test statistic follows the expected distribution for an individual light curve set if and only if the following conditions are satisfied (Coehlo 2010).

  • 1.  
    The distribution of measurements in each light curve in the light curve set is stationary.
  • 2.  
    Each light curve in the light curve set is ergodic in mean.
  • 3.  
    Each light curve in the light curve set is independent of the other light curves in the light curve set.

However, none of the light curve sets in our data set meet these requirements. Changes in the transparency of the atmosphere throughout the course of our typical 1.5 hr data runs induce fluctuations in the light curves, rendering them non-stationary. Furthermore, these fluctuations are correlated between the different telescopes, so the light curves are not independent of each other. We therefore apply a rolling mean and variance filter to each point in the light curves, thus replacing each flux measurement $f_j$ with $h_j$, given by

$h_j = \frac{f_j - \mu_j}{\sigma_j} \qquad (5)$

where $\mu_j$ is the local mean and $\sigma_j$ is the local variance at point j, calculated with window sizes of 33 and 151, respectively. Note that the variance is calculated after the rolling mean was subtracted, as was done in Equation (2). In most cases, this filter removes all of the slowly varying trends, and the resulting filtered light curves h meet the requirements for the application of the rank product statistic.
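A simplified version of this high-pass filter is sketched below; centered pandas rolling windows and no sigma clipping are assumed, which differs in detail from the production pipeline.

```python
import numpy as np
import pandas as pd


def highpass_filter(flux, mean_window=33, var_window=151):
    """Rolling mean/variance filter in the spirit of Equation (5).

    The local mean is subtracted and the residual is scaled by its local
    scatter.  Centered windows and no sigma clipping are simplifying
    assumptions of this sketch.
    """
    f = pd.Series(flux)
    mu = f.rolling(mean_window, center=True, min_periods=1).mean()
    resid = f - mu
    sigma = resid.rolling(var_window, center=True, min_periods=1).std()
    return (resid / sigma).to_numpy()
```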

However, there are still some data runs where fast moving cirrus clouds can induce simultaneous variations in the light curves that are not removed by the high-pass filter. We have therefore devised a pair of tests (Lehner et al. 2010) to identify light curve sets where the filtered light curves are not independent (and likely not stationary as well, given that the fluctuations in the light curves from the cirrus clouds are not constant over time). The tests are based on the fact that the rank tuples should be distributed uniformly over the T-dimensional rank space (recall that T is the number of telescopes used). For the first test, we divide the T-dimensional rank space uniformly into a T-dimensional grid. If the light curves are independent, we expect each cell in the grid to contain the same number of rank tuples. We thus calculate the Pearson's χ² statistic on the number of tuples in each cell of the grid. For the second test, we only look at the number of rank tuples in the lowest ranked cell in each dimension. When looking at all of the light curve sets in a given data run, the results from the first test should follow the χ² distribution, and the results from the second test should follow the hypergeometric distribution. (However, this is only true if the light curves are independent and identically distributed, which is a stricter requirement than stationarity. We have therefore devised a blockwise bootstrap method to calculate the expected distributions. We still refer to the tests as the Pearson's χ² test and the hypergeometric test.) We can thus calculate a p-value for both tests for each data run to see how well the two test statistics match the expected distributions. Histograms of these p-values are shown in Figure 2 for each test. If the statistics matched the expected distributions, we would expect a uniform distribution of p-values in the range [0, 1]. The large spikes evident in the lowest bins of each histogram are the data runs that have correlated fluctuations in the light curves that are not removed by the high-pass filter. This leads to a non-uniform distribution of rank tuples, meaning that the test statistics for the light curve sets do not follow the expected distributions. We thus remove every data run with a p-value less than 0.01; this means we would only expect to remove 1% of our data runs due to random chance. A sketch of the first test is given below.
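The sketch implements the first (Pearson's χ²) test on a T-dimensional grid of rank-tuple occupancies. The plain χ² p-value stands in for the blockwise-bootstrap calibration used in the paper, and the grid size is an arbitrary choice.

```python
import numpy as np
from scipy import stats


def rank_grid_chi2(ranks, n_bins=4):
    """Pearson chi-square test of uniformity of rank tuples on a T-dim grid.

    `ranks` has shape (T, N_P) with values 1..N_P.  The paper calibrates the
    expected distribution with a blockwise bootstrap; the textbook chi-square
    p-value below is a simplified stand-in.
    """
    T, n_p = ranks.shape
    # Map each rank to a bin index 0..n_bins-1, then to one cell of the grid.
    bins = (ranks - 1) * n_bins // n_p
    cell = np.ravel_multi_index(bins, dims=(n_bins,) * T)
    counts = np.bincount(cell, minlength=n_bins**T)
    expected = n_p / n_bins**T
    chi2 = ((counts - expected) ** 2 / expected).sum()
    return stats.chi2.sf(chi2, df=n_bins**T - 1)
```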


Figure 2. Histograms of p-values for the Pearson's χ² test (left panel) and hypergeometric test (right panel) for each of the data runs in the data set.


In order to perform these tests, we require enough high S/N stars to accurately calculate the two test statistics. We thus set a minimum of 10 stars with S/N > 10 in the data run. Any data run that does not meet this requirement is excluded from further analysis. Visual inspection of light curves in such data runs indicates that there are very few stars, and the S/N is compromised, usually by a very bright sky during a full moon. These data runs are not expected to contribute significantly to our effective sky coverage, so we lose little by excluding them from the final results. Furthermore, as will be discussed in Section 3.2.4, most of the stars in these data runs will end up being excluded anyway.

3.2.3. Detection of Multipoint Occultation Events

The rank product statistic described in Section 3.2.1 only works on one rank tuple at a time. This has the disadvantage of being inefficient at detection of longer duration occultation events (such as objects with D ≳ 5 km, scattered disk or Oort Cloud objects beyond 100 AU, or events measured at large opposition angles). However, as was shown in Coehlo (2010) and discussed in Lehner et al. (2010), if we replace a stationary light curve with a moving average of the form

$\bar{h}_j = \frac{1}{w} \sum_{i=j}^{j+w-1} h_i \qquad (6)$

where $h_j$ is the light curve after high-pass filtering and w is the rolling average window size, then the resulting light curve will also be stationary, and the rank product statistic will still be applicable. By applying a moving average, we can increase the significance of any candidate event since the averaged light curve will exhibit a more significant drop in the measured flux when the averaging window is centered on the event.
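Equation (6) amounts to a simple boxcar average of the filtered light curve; the sketch below assumes a trailing window, since the alignment convention is not stated here.

```python
import numpy as np


def moving_average(h, w):
    """Boxcar average of Equation (6): the mean of w consecutive filtered
    flux values, assuming a trailing window starting at index j."""
    return np.convolve(h, np.ones(w) / w, mode="valid")


# The event search is then run on h itself (w = 1) and on each averaged
# series, e.g. moving_average(h, 2), moving_average(h, 3), moving_average(h, 5).
```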

The effect of using different window sizes is illustrated in Figures 3 and 4, which show the results of the application of moving averages on actual light curves with simulated events added in. (Note that these simulated events include diffraction effects that are not strongly evident in the light curves due to the fact that they are integrated out with the 5 Hz sampling cadence.) We have used three different window sizes w = 2, 3, and 5, in order to optimize our detection efficiency. We also looked at the light curves with no average applied, which we will refer to as using a window size w = 1 in order to simplify the discussion. For the 1 km diameter simulated event, we detect the event using w = 1, 2, and 3. For such a small object, using w = 5 smoothes out the signal, while amplifying other small fluctuations, so that the event does not pass the cuts. For the 3 km and 8 km diameter objects, the events are detected only with w = 1 and 2. For the 30 km object, which is longer in duration than the other simulated events, the signal is not detectable for w = 1, but it is detected with w = 2, 3, and 5. (Note that for each diameter, we have used different light curve sets to add the simulated events. These example light curve sets have different S/N values, and thus no comparison should be made between the relative filter strengths among the different diameters.)


Figure 3. Light curves with simulated occultation events from 1 km (top) and 3 km (bottom) diameter objects added in, before and after application of the moving average. In each panel, the top light curve is the simulated event, the second light curve has no moving average applied, and the next three have moving averages with window sizes 2, 3, and 5, respectively.


Figure 4. Same as Figure 3, but for 8 km (top) and 30 km (bottom) diameter objects.


It is clear that the different window sizes increase the sensitivity to different size objects. However, while the effective sky coverage of the survey can be increased by using a wide array of window sizes, this will also increase the false positive rate (Lehner et al. 2010). Running the event search with multiple window sizes on the same light curve set does not constitute a set of independent tests, given that events can be detected using more than one window size, as demonstrated in Figures 3 and 4. It follows that any false positive event could be detected multiple times using different window sizes. However, given that we do not know the underlying distribution of flux values in the data set, we cannot estimate the rate at which this may occur. Therefore, instead of estimating the true false positive rate, we can only set an upper limit on the false positive rate by assuming that searching for events in a light curve set after applying the moving average is a completely independent test for each value of w. If we have a total of N_w window sizes, we thus modify Equation (4) as

$N_w\, N_\mathrm{T}\, P(Z > z_\mathrm{t}) = 0.25 \qquad (7)$

So the more window sizes that are used, the tighter the detection threshold must be. The optimization of the selection of the window sizes used in this analysis will be discussed in Section 4.

Note that the data quality tests described in Section 3.2.2 must be reapplied for each window size w. As shown in Lehner et al. (2010), application of the moving average can highlight small correlations between the telescopes that are insignificant in the unaveraged data. Therefore, using larger window sizes will require removing more data runs from the analysis. In cases where we use more than one window size, we always choose to be conservative and remove the union of data sets that would be rejected using each value of w.

3.2.4. Signal-to-Noise Ratio Cuts

The final cut we apply to our data set is a threshold on the S/N of each light curve set. The goal of this cut is to exclude stars which provide little sensitivity to occultation events, yet still contribute to the total number of tuples N_T, which tightens the selection threshold (see Equation (7)). This is illustrated in Figure 5, which shows the total distribution of S/N values for every light curve set in the data set, as well as the S/N distributions of stars for which simulated events are recovered for 1 km, 8 km, and 30 km objects. It is clear that there is little contribution to our effective sky coverage for S/N ≲ 5. Also shown is the detection efficiency for the same diameter objects versus S/N. As expected, the detection efficiency drops off at low S/N values, especially for the smaller objects.


Figure 5. Top panel: histogram of S/N values for light curve sets in the entire data set. (For each light curve set, the minimum S/N among all telescopes is used for the histogram.) Lower panels: S/N values of recovered events from our efficiency simulation for 1 km, 8 km, and 30 km objects (solid histogram). Also shown, using the right vertical axis, is the detection efficiency vs. S/N (dotted lines).


The exact value of S/N to use as a threshold depends on many factors, especially the set of moving average window sizes that will be used. This optimization is discussed in Section 4.

4. RESULTS

We searched through our data set using moving average window sizes of 1, 2, 3, and 5 (and every combination thereof) as well as S/N cuts of 0, 1, 2, 3, 4, 5, 6, and 7. For each combination of window size and S/N threshold, the detection threshold z_t was calculated as shown in Equation (7). In all cases, no events were found in the data set, so the next step in the analysis was to measure our effective sky coverage and optimize the selection of moving average window sizes and S/N cut.

The total number of events expected in the survey is given by

$N_\mathrm{exp} = \int \frac{dn}{dD}\, \Omega_\mathrm{eff}(D)\, dD \qquad (8)$

where dn/dD is the differential size distribution (the number of objects deg^−2 km^−1 along the ecliptic), and Ω_eff is the effective sky coverage of the survey. We estimate Ω_eff by implanting a simulated event into each light curve set and seeing if it is recovered by our selection criteria. For this simulation, we choose a set of diameters between 0.5 km and 30 km, and for each diameter, we implant exactly one event into each light curve set. We assume every object is at a geocentric distance of 43 AU, since the light curve shape does not depend critically on the distance for most objects (40–46 AU) in the Kuiper Belt. The tests for each diameter are performed independently, so there is never more than one event implanted into any light curve set at any time. The effective sky coverage is calculated as

$\Omega_\mathrm{eff}(D) = \sum_{l} E_l\, \frac{v_\mathrm{rel}\, H_l}{\Delta^2} \qquad (9)$

where the sum is over only those light curve sets l where the added event was recovered, $E_l$ is the duration of the light curve set, $v_\mathrm{rel}$ is the relative transverse velocity between the Earth and the KBO, Δ is the geocentric distance to the KBO (again, fixed to a constant value of 43 AU), and H is the event cross section. We estimate H as

$H = \left[\left(2\sqrt{3}\,F\right)^{3/2} + D^{3/2}\right]^{2/3} + \theta_*\,\Delta \qquad (10)$

where $\theta_*$ is the angular size of the target star and F is the Fresnel scale, which is included to account for the minimum cross section of an event due to diffraction (Nihei et al. 2007).
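The sketch below evaluates Equations (9) and (10), as reconstructed above, for a single recovered light curve set. The relative velocity, stellar angular size, and the exact coefficients of the diffraction term are illustrative assumptions of this sketch rather than values from the paper.

```python
import numpy as np

AU = 1.496e11                          # m
DEG2_PER_SR = (180.0 / np.pi) ** 2     # deg^2 per steradian


def event_cross_section(D, theta_star, fresnel=1.4e3, delta=43 * AU):
    """Event cross section H (Equation (10) as reconstructed above).

    D is the object diameter in metres and theta_star the stellar angular
    size in radians.  The diffraction term follows the form of Nihei et al.
    (2007); the exact coefficients are an assumption of this sketch.
    """
    shadow = ((2.0 * np.sqrt(3.0) * fresnel) ** 1.5 + D ** 1.5) ** (2.0 / 3.0)
    return shadow + theta_star * delta


def omega_eff_contribution(E, v_rel, H, delta=43 * AU):
    """Per-light-curve-set term of Equation (9), converted to deg^2."""
    return E * v_rel * H / delta**2 * DEG2_PER_SR


# Illustrative numbers (not from the paper): one recovered 1.5 hr light curve
# set, v_rel ~ 25 km/s near opposition, and a star of 0.02 mas angular size.
H = event_cross_section(D=1e3, theta_star=0.02e-3 / 206265.0)
print(omega_eff_contribution(E=1.5 * 3600, v_rel=25e3, H=H))
```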

To optimize the selection of window sizes and S/N cut, we looked at the resulting values of Ω_eff for every possible combination. There is no clear optimum set of cuts, so we decided to optimize our selection based on three criteria.

  • 1.  
    Minimize the upper limit of the slope q at the small end of the size distribution, assuming a power-law size distribution anchored at the break diameter of 90 km and a cumulative surface density at the break diameter of 5.4 deg^−2 (Fraser & Kavelaars 2009; Schlichting et al. 2009).
  • 2.  
    Maximize the sensitivity at D = 1 km, where Schlichting et al. (2009, 2012) claimed the detection of two KBOs.
  • 3.  
    Maximize the sensitivity at D = 30 km in order to bring our upper limit as close as possible to the current direct detection limits (Bernstein et al. 2004; Fuentes et al. 2009; Fraser & Kavelaars 2009).

The best results came from using S/N cuts of 3, 4, and 5. The results are summarized in Table 2. We found that the upper limit on the slope q did not vary significantly as long as we limited ourselves to two window sizes and included w = 1. The combination of w = 1 and 2 with a minimum of S/N > 3 gave the largest effective sky coverage at 1 km; however, this combination was particularly bad at 30 km. The best result at 30 km came from w = 1 and 5 with S/N > 3, but this combination was not very good at 1 km. The combinations that worked best for both diameters used w = 1 and 3 with S/N cuts of 3 and 4. The resulting values of Ω_eff for these four combinations are shown in Figure 6. In the end, we opted for w = 1 and 3 with a threshold of S/N > 3, as this is better at 30 km and not much worse at 1 km.


Figure 6. Plots of Ω_eff vs. D for the optimum combinations of moving average window sizes and S/N cuts. The values are scaled by Ω_B10, which is Ω_eff from Bianco et al. (2010), in order to highlight the differences.


Table 2. Results from Using Different Window Sizes and S/N Cuts

Window Sizes   S/N Cut   Tuples   z_t   q   Ω_eff (D = 1 km)   Ω_eff (D = 30 km)
1 and 2   3   9.49 × 10^9   1.32 × 10^−11   3.82   (3.590 ± 0.053) × 10^−7   (2.788 ± 0.011) × 10^−5
1 and 2   4   7.13 × 10^9   1.75 × 10^−11   3.81   (3.703 ± 0.054) × 10^−7   (2.577 ± 0.010) × 10^−5
1 and 2   5   5.63 × 10^9   2.22 × 10^−11   3.82   (3.806 ± 0.055) × 10^−7   (2.277 ± 0.010) × 10^−5
1 and 3   3   9.14 × 10^9   1.37 × 10^−11   3.82   (3.467 ± 0.053) × 10^−7   (5.065 ± 0.014) × 10^−5
1 and 3   4   6.87 × 10^9   1.82 × 10^−11   3.82   (3.559 ± 0.053) × 10^−7   (4.430 ± 0.013) × 10^−5
1 and 3   5   5.42 × 10^9   2.31 × 10^−11   3.82   (3.657 ± 0.054) × 10^−7   (3.770 ± 0.012) × 10^−5
1 and 5   3   8.60 × 10^9   1.45 × 10^−11   3.84   (3.213 ± 0.051) × 10^−7   (5.594 ± 0.015) × 10^−5
1 and 5   4   6.46 × 10^9   1.94 × 10^−11   3.84   (3.301 ± 0.052) × 10^−7   (4.824 ± 0.014) × 10^−5
1 and 5   5   5.08 × 10^9   2.46 × 10^−11   3.84   (3.413 ± 0.052) × 10^−7   (4.071 ± 0.013) × 10^−5


Given our final set of cuts, the resulting plot of Ω_eff versus D is shown in Figure 7. The points on the curve indicate the diameters where we calculated Ω_eff using our detection efficiency simulation, and the curve itself is a cubic spline fit to these values.


Figure 7. Ω_eff vs. D using our final selection criteria of w = 1 and 3 and S/N > 3. Also shown for comparison are the previous TAOS results from Bianco et al. (2010; dotted line) and Zhang et al. (2008; dashed line).


Since no events were detected, we can set a model-independent upper limit by excluding, at the 95% c.l., any model that would lead us to expect more than three detected events. Our model-independent upper limit is shown in Figure 8 along with results from other occultation surveys and direct searches. We note that our upper limit at D = 1 km is a factor of 4.5 below that reported by Schlichting et al. (2012), but it is also a factor of 6.7 higher than their lower limit based upon their two reported events. Therefore, even though we found no events at 1 km (despite the fact that our efficiency simulation for 1 km diameter objects showed that we are in fact sensitive to events similar to those detected by the Hubble Space Telescope (HST) survey), our limit is still consistent with their published result.


Figure 8. Solid line: 95% c.l. upper limit on cumulative surface density vs. diameter D (bottom axis) and R magnitude (top axis, assuming an albedo of 4% and a distance of 43 AU) from current TAOS data set. Dotted lines: 95% c.l. upper and lower limits on surface density from Schlichting et al. (2012). Cross and error bar (S12): surface density reported by Schlichting et al. (2012). (Note that this is not strictly model independent because it is based on a power-law model. However, the result does not depend very strongly on the assumed slope.) Solid squares (RXTE): upper limit (right point) reported by Liu et al. (2008) and the best-fit surface density (left point) reported by Chang et al. (2007) from an occultation search through RXTE observation of Sco X-1. Solid triangles (PS): upper limits reported by Wang et al. (2010) using Pan-STARRS guider images. Empty triangle (BKW): upper limit reported by Bickerton et al. (2008) using the 1.8 m telescope at DAO. Empty circles (MMT): upper limits reported by Bianco et al. (2009) from analysis of trailed images obtained with the MMT. Upside down empty triangle (R06): upper limit reported by Roques et al. (2006) from high-speed imaging using the 4.2 m Herschel Telescope. Empty square (HST): upper limit reported by Bernstein et al. (2004) from a direct survey using the HST. Solid circle (FGH): surface density reported by Fuentes et al. (2009) from a direct search using Subaru. Star (FK): surface density reported by Fraser & Kavelaars (2009) from a direct survey using Subaru. (Note that Fraser & Kavelaars (2009) assume an albedo of 6% and a distance of 35 AU, so they claim that the point plotted at R = 27.875 corresponds to a diameter D = 15 km.) Lines and box labeled CB, P, and SD: required surface density for the Classical Belt (Levison & Duncan 1997), Plutinos (Morbidelli 1997), and Scattered Disk (Volk & Malhotra 2008), respectively, to be the source for the observed distribution of Jupiter family comets.


We also calculate upper limits on the small KBO size distribution assuming it follows a power law anchored at the detected break at D = 90 km. We use Equation (8) to calculate the number of expected events as a function of slope q. The results are shown in Figure 9. We use two values for the surface density at D = 90 km, which normalize the size distribution for smaller diameters. Using the value reported by Fraser & Kavelaars (2009) of N_{D > 90 km} = 5.4 deg^−2 yields a 95% c.l. upper limit of q = 3.82. In comparison, Schlichting et al. (2009) report a slope of q = 3.8 ± 0.2 based on their claim of two detections at D = 1 km, assuming the inclination distribution of small KBOs follows that of the larger (D > 100 km) objects (Elliot et al. 2005). The second value we use is N_{D > 90 km} = 38.0 deg^−2, as reported by Fuentes et al. (2009). This results in a 95% c.l. upper limit on q of 3.34.
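The logic of this limit can be illustrated with the sketch below, which integrates Equation (8) for a power law anchored at the break and finds the slope at which three events would be expected. The Ω_eff(D) curve used here is a made-up placeholder (the real analysis uses the simulated values shown in Figure 7), so the printed number is not the paper's limit.

```python
import numpy as np
from scipy import optimize

N_BREAK = 5.4        # cumulative surface density (deg^-2) at the break (Fraser & Kavelaars 2009)
D_BREAK = 90.0       # break diameter in km

# Placeholder effective-coverage curve over the survey's 0.5-30 km range;
# the shape and normalization below are illustrative assumptions only.
D_grid = np.logspace(np.log10(0.5), np.log10(30.0), 200)   # km
omega_grid = 3.5e-7 * D_grid ** 1.8                          # deg^2 (assumed)


def expected_events(q):
    """Equation (8): integral of (dn/dD) * Omega_eff(D) over D, where dn/dD
    is the differential form of a power law anchored at the break."""
    dn_dD = N_BREAK * (q - 1.0) / D_BREAK * (D_grid / D_BREAK) ** (-q)
    y = dn_dD * omega_grid
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(D_grid))   # trapezoid rule


# 95% c.l. upper limit on q: the slope at which three events would be expected.
q_limit = optimize.brentq(lambda q: expected_events(q) - 3.0, 2.5, 6.0)
print(q_limit)
```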


Figure 9. Number of expected events vs. slope q. Solid line: number of expected events using a surface density of N_{D > 90 km} = 5.4 deg^−2, as reported by Fraser & Kavelaars (2009). Dashed line: number of expected events vs. slope q, using a surface density of N_{D > 90 km} = 38.0 deg^−2, as reported by Fuentes et al. (2009). Since no events were found, any model predicting more than three events (dotted line) is excluded at the 95% c.l.


5. FUTURE PLANS

After seven years of observations, it would be of only marginal value to continue the survey in its present form, so at the end of the data set discussed in this paper, we shut down the system for a camera upgrade. Each new camera, manufactured by Spectral Instruments, utilizes a CCD47-20 frame transfer CCD from e2v. With the new cameras, we can image at a readout cadence of just under 10 Hz if we use 2 × 2 binning. The CCDs are 1k × 1k with 13 μm pixels, as opposed to the 2k × 2k imagers with 13.5 μm pixels used to collect the current data set, so 77% of the field of view is lost. However, by moving away from zipper mode to full-frame imaging, our S/N increases significantly, and we estimate that our limiting magnitude will improve from R = 13.5 to 15. We thus expect to be able to monitor a similar number of stars. Given the higher readout cadence, we become significantly more sensitive to objects with D ≲ 1 km. Observations with these new cameras began in 2012 November.

Work at ASIAA was supported in part by the thematic research program AS-88-TP-A02. Work at NCU and at Lulin Observatory was supported in part by grant NSC101-2628-M-008-002. Work at the CfA was supported in part by the NSF under grant AST-0501681 and by NASA under grant NNG04G113G. Work at Yonsei was supported by the NRF grant 2011-0030875. Work at NCU was supported by the grant NSC 96-2112-M-008-024-MY3. Work at Berkeley was supported in part by NSF grant DMS-0636667. Work at SLAC was performed under USDOE contract DE-AC02-76SF00515. Work at NASA Ames was supported by NASA's Planetary Geology and Geophysics Program.

Footnotes

^17 This is the same algorithm used in SExtractor (Bertin & Arnouts 1996) to detect objects.
