
The Automated Palomar 60 Inch Telescope


Published 2006 October 23 © 2006. The Astronomical Society of the Pacific. All rights reserved. Printed in U.S.A.
Citation: S. Bradley Cenko et al. 2006 PASP 118 1396. DOI: 10.1086/508366


ABSTRACT

We have converted the Palomar 60 inch (1.52 m) telescope from a classic night‐assistant‐operated telescope to a fully robotic facility. The automated system, which has been operational since 2004 September, is designed for moderately fast (t≲3 minutes) and sustained (R≲23 mag) observations of gamma‐ray burst afterglows and other transient events. Routine queue‐scheduled observations can be interrupted in response to electronic notification of transient events. An automated pipeline reduces data in real time, which is then stored on a searchable Web‐based archive for ease of distribution. We describe here the design requirements, hardware and software upgrades, and lessons learned from roboticization. We present an overview of the current system performance as well as plans for future upgrades.


1. INTRODUCTION

The field of optical transient astronomy has matured to produce numerous important scientific discoveries in recent years. Type Ia supernovae (SNe) have been used as standard candles to produce Hubble diagrams out to z ∼ 0.5, providing evidence that the expansion of the universe is accelerating (Riess et al. 1998; Perlmutter et al. 1999). Observations of the broadband afterglows of long‐duration (t > 2 s) gamma‐ray bursts (GRBs) have revealed an association with the deaths of massive stars (Galama et al. 1998; Stanek et al. 2003; Hjorth et al. 2003). The discovery of the first afterglows and host galaxies of short‐duration (t < 2 s) GRBs (Gehrels et al. 2005; Bloom et al. 2006b; Hjorth et al. 2005; Fox et al. 2005b) has possibly revealed a new class of GRB progenitors: compact binary coalescence (Eichler et al. 1989).

As interest in the field has steadily grown, new, more powerful methods of identifying optical transients have been developed. The Swift Gamma‐Ray Burst Explorer (Gehrels et al. 2004) is currently providing ∼100 prompt GRB localizations per year, an order‐of‐magnitude improvement over previous missions. Planned wide‐angle, high‐cadence surveys with large facilities, such as Pan‐STARRS (Panoramic Survey Telescope and Rapid Response System; Kaiser et al. 2002) and LSST (Large Synoptic Survey Telescope; Tyson 2005), promise to overwhelm our current follow‐up capability, providing hundreds of variable optical sources each night.

Dedicated robotic, medium‐aperture (1–3 m) telescopes have the opportunity over the next few years to play a crucial role in this field. Like small‐aperture (<0.5 m) robotic facilities, they can respond autonomously to transient alerts, providing observations at early times. And given the relative abundance of such telescopes, it is entirely feasible to devote them predominantly to transient astronomy. At the same time, as with larger telescopes (>5 m), interesting events can be followed for longer durations and in multiple colors. In this sense, robotic, medium‐aperture facilities can bridge the gap between the earliest rapid‐response observations and deep, late‐time imaging and spectroscopy.

To this end, we have roboticized the Palomar 60 inch telescope (P60). As a dedicated, robotic facility, the P60 is capable of responding moderately fast (t≲3 minutes) to transient alerts. With the increased event rate of Swift, the P60 is providing observations of the poorly understood early afterglow phase (Fig. 1). In addition, as a 1.5 m telescope, the P60 can continue the sequence of observations longer than most robotic telescopes. As Figure 2 shows, one day after the burst, most afterglows have faded below R = 20; however, for days or even weeks after that, they remain at levels of R<23, accessible to P60 photometry.


Fig. 1.— Early afterglows of pre‐Swift GRBs and P60 response capabilities. Regions with a white background are accessible for automated P60 observations: t ≳ 3 minutes, R ≲ 23 mag. With only a handful of examples, the early optical afterglows of pre‐Swift GRBs show a marked diversity. GRB 990123 (Akerlof et al. 1999) and GRB 021211 (Fox et al. 2003a; Li et al. 2003) exhibit the fast (t^−2) early‐time decay indicative of adiabatic evolution of the reverse shock. On the other hand, GRB 021004 (Fox et al. 2003b; Holland et al. 2003; Pandey et al. 2003) shows a distinctive slow (t^−0.4) decay that likely signifies continuing energy input to shock regions. Reverse shock emission from GRB 030418 (Rykoff et al. 2004) was not seen; the optical peak at t = 0.4 hr is due to the forward shock component. As a proof of concept, the P60 was the first to report the afterglow of GRB 040924 (Fox & Moon 2004; Li et al. 2004; Hu et al. 2004; Silvey et al. 2004; Khamitov et al. 2004). The early‐time behavior is quite similar to that of GRB 021211.


Fig. 2.— Late‐time light curves of pre‐Swift GRB afterglows. The gray‐shaded region displays the phase space inaccessible to automated P60 observations. Observations of most afterglows require >1 m class facilities after the first night; investigation of optically extinguished ("dark") or high‐redshift bursts requires such facilities merely to register detections or collect physically interesting upper limits.

In this work, we first outline the high‐level design requirements of a robotic system optimized for observations of transient sources (§ 2). Section 3 provides the details of the automation procedure, including both the hardware and the software efforts. Section 4 describes the current system performance (as of 2006 May), which will primarily be of use for those interested in observing with the P60. Finally, in § 5, we conclude with a summary of the project status and a discussion of possible future improvements to the robotic system.

2. GENERAL DESIGN CONSIDERATIONS

Designing a robotic system for transient astronomy presents a unique set of challenges from both a hardware and a software perspective. It is necessary to create an intelligent system that can reliably handle the roles usually provided by the observer and night‐assistant at a standard facility (see, e.g., Genet & Hayes 1989).

Given our scientific objectives, we identified the following system requirements for the Palomar 60 inch automation project:

  • 1.  
    Automated transient response in ≲3 minutes.—GRB afterglows are predicted to decay in time as a power law (Fν ∝ t^−α) with index α ≈ 1–2, depending on whether the emission is dominated by the forward shock (αFS ≈ 1; Sari et al. 1998) or reverse shock (αRS ≈ 2; Sari & Piran 1999). For (optically) bright bursts, rapid response enables studies of the afterglow at its brightest, shedding light on the poorly understood early afterglow phase (Fig. 1). For the fainter bursts, rapid response is required simply to obtain a detection or even a meaningful upper limit (Fig. 2; see the numerical sketch following this list). Our desired response overhead is limited primarily by the telescope slew time.
  • 2.  
    CCD readout in <30 s.—Given the expected power‐law behavior, densely sampled observations are necessary to accurately characterize the early afterglow decay. And since our current system is not equipped with an automated guider, deep observations must be broken down into many individual exposures (and hence many accompanying readouts). Given typical values for our telescope slew time (3 minutes) and exposure time (1–3 minutes), we determined that a readout time <30 s would not significantly affect our sampling rate or efficiency.
  • 3.  
    Photometry from the near‐ultraviolet to the near‐infrared.—GRB redshifts can be estimated photometrically by modeling afterglow spectral energy distributions (SEDs). Lyα absorption in the intergalactic medium (IGM) causes a steep cutoff in the SED, the location of which indicates the afterglow redshift (Lamb & Reichart 2000). To constrain as large a redshift range as possible (2 < z < 6), we require coverage over the entire optical bandpass (see Fig. 3). The ideal solution would be a multiband camera, providing simultaneous imaging in multiple filters. The cost of either purchasing or building such an instrument, however, was too high for our first generation of operations. Instead, we employ a 12‐position filter wheel, with coverage spanning from Johnson U band (λc = 3652 Å) to Sloan z' band (λc = 9222 Å).
  • 4.  
    Intelligent observation oversight.—Like a virtual night assistant, a centralized source of information is required to effectively manage nightly observations (i.e., telescope, weather, and instrument status information). Under ideal conditions, this is not a difficult task. More challenging, however, is implementing a robust capability to intelligently respond to adverse conditions.
  • 5.  
    Queue‐scheduling system for standard mode.—Since not all of the telescope time is devoted to rapid‐response GRB observations, a scheduler is needed to handle standard scientific observations, as well as calibration images. We chose to implement a queue‐scheduler, as it is capable of providing real‐time management of observations (i.e., targets can be submitted to the queue at any time) with a minimal amount of daily oversight (night‐to‐night memory ensures that there is no need to write daily target lists). Furthermore, a queue scheduler is ideally suited for long‐term monitoring of transient objects; SNe and GRBs can be left in the queue for regular monitoring on timescales of weeks or even months.
  • 6.  
    Automated, real‐time (<2 minutes) data reduction.—Real‐time data reduction is necessary for several reasons. First and foremost, feedback is required for standard system oversight commonly performed by observers present at the telescope. Focusing is the simplest example. Second, rapid identification of optical counterparts is critical for intelligent follow‐up observations. High‐resolution absorption spectroscopy in particular requires a rapid turnaround with large facilities. Finally, properly handling the large amounts of data produced on a nightly basis requires that data reduction be fully automated.
  • 7.  
    Fully searchable, Web‐based data archive.—The average P60 data rate, including daily calibration files, is ∼5 gigabytes per night. Furthermore, with our queue‐scheduling system, science images are obtained for a large number of users (∼10) on most nights. We therefore opted for a high‐capacity, fully searchable data archive for ease of data storage and distribution.
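
To make requirement 1 concrete, the following minimal sketch converts the power‐law decay quoted in the first item into magnitudes of brightness lost by a delayed response. The response times used here are illustrative, not measurements from this paper.

```python
import math

def delta_mag(alpha, t1, t2):
    """Fading between times t1 and t2 for F_nu ∝ t^-alpha:
    delta_m = 2.5 * alpha * log10(t2 / t1)."""
    return 2.5 * alpha * math.log10(t2 / t1)

# Reverse-shock-dominated afterglow (alpha_RS ≈ 2): responding at 30 minutes
# instead of 3 minutes costs 5 magnitudes of brightness.
print(delta_mag(2.0, 3.0, 30.0))   # 5.0

# Forward-shock-dominated afterglow (alpha_FS ≈ 1): the same delay costs 2.5 mag.
print(delta_mag(1.0, 3.0, 30.0))   # 2.5
```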

Fig. 3.— Optical and near‐infrared SEDs of GRB afterglows as a function of redshift. These SEDs are models of the afterglow of GRB 990510 1 hr after the burst (Panaitescu & Kumar 2001), viewed at redshifts ranging from z = 1 to 10. The P60 R‐band sensitivity (1 hour integration, R ≈ 23 mag) is shown as a dashed line, extended to all frequencies for reference. The central wavelengths of the broadband filters on the P60 are drawn above the spectra, along with those of the standard JHKs near‐infrared filter set. Lyα absorption in the IGM causes the steep cutoff in the afterglow spectra, which can be used to estimate the redshift of GRB afterglows photometrically (Lamb & Reichart 2000).

3. AUTOMATION PROCEDURE

In § 2 we outlined the design requirements for the automated system. Here we describe in greater detail the techniques we have used to meet these requirements.

3.1. New CCD and Electronics

The previous P60 CCD took almost 3 minutes to read out, unacceptably long given our desired response time of ≲3 minutes. Furthermore, the camera was only accessible via a local MicroVAX terminal, making automated observations impossible. To meet our design requirements, we chose to build a new camera using the latest San Diego State University controller Generation III electronics (SDSU‐III; Leach & Low 2000). This system is capable of better performance than an off‐the‐shelf product, with the trade‐off being that a significant time investment was required for development and testing. In the following two sections, we describe the new electronics (§ 3.1.1) and the software used to control the camera (ArcVIEW; § 3.1.2).

3.1.1. SDSU‐III Electronics

The telescope was equipped with a new SITe 2K × 2K back‐illuminated CCD. While we have not measured the quantum efficiency of the new device, our observations indicate that its quantum efficiency is comparable to that of the previous camera (an identical SITe 2K × 2K CCD). For reference, we include a quantum efficiency plot from the old CCD in Figure 4.


Fig. 4.— Previous P60 CCD quantum efficiency. While we have not measured the quantum efficiency of the new P60 CCD, it is identical in design to the previous version shown here. Comparing observations made with both detectors indicates a comparable overall performance.

The new CCD is controlled by an SDSU‐III controller (Leach & Low 2000). The new controller contains a faster optical link than the Generation II system, as well as a newly designed timing board. The system is capable of reading out four channels in parallel. However, to reduce costs and simplify fabrication, we currently utilize only two amplifiers for readout.

Temperature sensors were placed in thermal contact with the CCD and the dewar neck and can, as well as on board the electronics. These sensors are capable of triggering an alarm under abnormal conditions, for example, when the dewar runs out of liquid nitrogen and begins to warm.

In addition to the standard full‐frame readout mode, two additional capabilities have been implemented. Using the region‐of‐interest (ROI) functionality, we can read out only a subsection of the chip. This is particularly important for small GRB error circles, helping to improve both the sampling rate and efficiency of our system. In addition, the ability to manipulate charge independent of the readout ("parallel shift") greatly decreases the time required for a focus loop. This has been of utmost importance, given the difficulties we have encountered maintaining system focus throughout the night (see § 4.3).

The relevant characteristics of the new camera are outlined in Tables 1 and 2. The P60 camera was the first developed under an engineering scheme designed to standardize enclosures and cabling for new instruments on the mountain. The lessons learned have been extended to future instruments being developed for Palomar Observatory.

3.1.2. Instrument Control System: ArcVIEW

The software used to control instrument operation is called ArcVIEW, a package that was developed at the Cerro Tololo Inter‐American Observatory and Caltech. It is based on LabVIEW (interfaces and communication) and C (real‐time data processing and driver API [application programming interface]).

The ArcVIEW architecture consists of a set of software modules that can be loaded or unloaded dynamically to control different processes. The core of the software receives commands and passes them to the appropriate module for processing. A translation layer built into the system allows for transparent hardware control (i.e., the standard command set available to the user is independent of the details of the hardware being controlled).

ArcVIEW commands are sent as plain ASCII strings passed through raw sockets. Graphical user interfaces (GUIs) are not needed to control the system; however, some of them are provided in order to handle data taking, filter movements, telescope control system (TCS) commands, and low‐level engineering commands in a user‐friendly way.
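
As a concrete illustration of this interface, here is a minimal sketch of a client that sends one ASCII command over a raw socket and reads the reply. The host, port, and command string are hypothetical; the actual P60 command set is not reproduced in this paper.

```python
import socket

def send_command(host, port, command, timeout=10.0):
    """Send one plain-ASCII command to an ArcVIEW-style server and
    return its ASCII response."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        return sock.recv(4096).decode("ascii").strip()

# Hypothetical usage: request a 60 s exposure through the R filter.
# reply = send_command("p60-arcview.example.edu", 5555, "EXPOSE 60 R")
```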

Besides the normal command/response channel, ArcVIEW contains an optional asynchronous message channel, which allows the system to send asynchronous alarm messages (temperatures, power supplies, etc.), callbacks, or event messages to the connected client. Using this extra channel makes it possible to perform simultaneous actions (e.g., moving the telescope while reading out the array).

The final output of the system is an image (or sequence of images) written in FITS format and containing user‐defined header information. The two P60 amplifiers are read out and stored as a multiextension FITS file.
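
For illustration, a few lines showing how such a two‐extension file can be inspected. We use astropy.io.fits, which postdates the original pipeline (the pipeline itself used PyRAF/IRAF tools); the filename and header keyword are placeholders.

```python
from astropy.io import fits

with fits.open("p60_example.fits") as hdulist:
    hdulist.info()   # primary HDU plus one image extension per amplifier
    for ext in hdulist[1:]:
        # each extension carries its own pixel array and header keywords
        print(ext.name, ext.data.shape, ext.header.get("DETSEC"))
```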

We have chosen a modular design for our major software components, as illustrated in Figure 5. Each component acts independently, with a well‐defined communication protocol between the different modules. This makes software upgrades easier, allows for a clean division of labor and responsibilities, and guarantees a more robust system, as failure in one component does not necessarily imply complete system failure. Modular designs have long been in use at automated facilities and have proved both reliable and effective (see, e.g., Honeycutt & Turner 1992; Steele & Carter 1997; Granzer et al. 2001; Bloom et al. 2006a). On the P60, ArcVIEW acts as a single point of contact between hardware operation (telescope, CCD, and filter wheel) and all other system components (see Fig. 5).


Fig. 5.— P60 software overview. Arrows indicate direct channels of communication. The modular design was chosen to ensure both stability and ease of upgrade/repair.

3.2. Observatory Control System

The purpose of the observatory control system (OCS) is to provide intelligent oversight of nightly observations and to coordinate information from all system components (Fig. 5). We identify four primary tasks for which the OCS is responsible, each discussed below.

First, at the beginning of each night, the OCS spawns the queue‐scheduling software in a separate process (see § 3.3). These two systems communicate throughout the night via a socket, as real‐time target selection depends on the success of previous observations.

After receiving an observation request, the OCS is then responsible for executing it in a safe and efficient manner. Communication with the TCS, via the transparent ArcVIEW intermediary, ensures that external conditions permit the requested observation. All component tasks that can be completed in parallel (e.g., moving the telescope and the filter wheel) are executed concurrently to improve system efficiency, as sketched below. An observation is considered to have completed successfully when the readout of the final exposure begins.
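
A minimal sketch of this parallelization; the two worker functions are hypothetical stand‐ins for the corresponding ArcVIEW hardware commands.

```python
import threading

def slew_telescope(ra_deg, dec_deg):
    pass  # placeholder: would issue the TCS slew command via ArcVIEW

def move_filter_wheel(filter_name):
    pass  # placeholder: would issue the filter-wheel command via ArcVIEW

threads = [
    threading.Thread(target=slew_telescope, args=(150.1, 22.3)),
    threading.Thread(target=move_filter_wheel, args=("R",)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # the exposure begins only after both tasks have completed
```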

Third, after the successful completion of the first images on any given night, the OCS spawns the data reduction pipeline in a separate process (see § 3.4). These two systems communicate to ensure the integrity of science images, most notably by maintaining telescope focus throughout the night (see § 4.3).

Finally, the OCS handles any errors that arise during the normal course of operations. Each error condition is assigned a level in a hierarchy of functionality, with lower levels corresponding to more basic, elementary functionality and higher levels to more complex operations. When an error is discovered, the OCS begins at the appropriate error level and works downward until the depth of the error condition is determined. The OCS then works to restore the system to functionality. If no solution can be found, the system goes into a safe mode, closing the dome and terminating observations. E‐mail notices and text messages are sent to alert users of this condition.

As an example, we consider an error generated by the focus encoder during routine operation. The OCS first verifies communication with the TCS. If this fails and cannot be restored, the system checks communication with ArcVIEW, as it is responsible for routing most communication. If this too fails and cannot be restarted, the OCS checks for Internet connectivity. This process continues until either a solution is discovered or human intervention is required. Similar systems have been used successfully on other automated facilities (Honeycutt & Turner 1992; Granzer et al. 2001).
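
A sketch of this top‐down diagnosis, under the assumption that each level exposes a simple health check; the check functions, level names, and stub results below are hypothetical.

```python
# Levels ordered from the point where the error surfaced down to the most
# elementary functionality; each stub stands in for a real health check.
def tcs_ok():     return False   # placeholder result
def arcview_ok(): return True    # placeholder result
def network_ok(): return True    # placeholder result

LEVELS = [("TCS", tcs_ok), ("ArcVIEW", arcview_ok), ("network", network_ok)]

def diagnose(start=0):
    """Work downward from the failing level until the depth of the
    error condition is determined; return the deepest failing level."""
    deepest = None
    for name, check in LEVELS[start:]:
        if check():
            break            # this layer works, so the fault lies above it
        deepest = name       # this layer also failed; keep descending
    return deepest

print(diagnose())            # -> "TCS": only the top level is broken
```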

3.3. Observation Scheduling System

In the design of the observation schedule system (OSS), we have deliberately pursued a "shortsighted" strategy of selecting targets in real time. That is, observations are chosen at each point in the night when the OCS reports being in a ready state, rather than attempting to optimize a sequence of observations over the course of a full night (or over multiple nights). This strategy is relatively well suited to ground‐based observations for which future observing conditions are unknown and observing overheads are a relatively minor concern. Moreover, the scheduling protocol and target list for P60 observations are modest enough that a full evaluation of the target list can be performed in a matter of seconds. This principle of "just in time" scheduling has also been pursued at several larger scale queue‐observing facilities (Chavan et al. 1998; Sasaki et al. 2000; Adamson et al. 2004), as well as at more modest robotic observatories (Honeycutt et al. 1990; Fraser & Steele 2004).

Target scores are determined on the basis of raw target priorities, which are fixed in advance, combined with the application of several parametric weightings. The most important of these for scheduling purposes are the Airmass and Night weighting variables, which take as input the current air mass of the target and the number of hours left before the target becomes unobservable (due to target‐set or morning twilight), respectively.

Each weighting acts in the same way: based on the value of its input variable, the weight is calculated and applied as a multiplier to the target score (initially, the target priority). If any weighting is zero, then the target score is necessarily zero; otherwise, the target score is increased or decreased depending on whether the weight in question is greater or less than 1.

The full list of possible weighting variables includes:

  • 1.  
    Airmass.—The input variable is the current air mass of the target. This weighting prefers sources that are close to transit (minimum air mass).
  • 2.  
    Night.—The input variable is the number of hours until the source becomes unobservable. This weighting helps ensure the efficiency of scheduler operations, since it prefers sources that are setting rather than rising. The estimated duration of the target's full exposure sequence is included in the calculation.
  • 3.  
    Moondeg.—The input variable is 180° minus the current angular distance from the target to the Moon. This weighting avoids taking images with a high sky background due to moonlight.
  • 4.  
    Seeing.—The input variable is the current seeing in arcseconds. This allows programs to be segregated according to whether their science is adversely affected by poor seeing.
  • 5.  
    Extinction.—The input variable is the current extinction due to clouds, in magnitudes in the R band. This allows programs to be segregated according to how strongly they are affected by reduced sensitivity.

The Seeing and Extinction weightings are not yet in operation but should be applied dynamically within the OSS by the end of summer 2006.
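
A minimal sketch of this multiplicative scoring. Only the structure (priority times weights, with any zero weight vetoing the target) follows the text; the functional forms of the two weights are our own illustrative choices.

```python
def airmass_weight(airmass, limit=3.0):
    """Illustrative form: prefer low air mass, veto beyond the limit."""
    return 0.0 if airmass >= limit else 1.0 / airmass

def night_weight(hours_left, sequence_hours):
    """Illustrative form: prefer setting targets, veto if the full
    exposure sequence can no longer be completed before target set."""
    return 0.0 if hours_left <= sequence_hours else 1.0 + 1.0 / hours_left

def target_score(priority, airmass, hours_left, sequence_hours):
    score = priority
    for weight in (airmass_weight(airmass),
                   night_weight(hours_left, sequence_hours)):
        if weight == 0.0:
            return 0.0       # any zero weight makes the score zero
        score *= weight
    return score

# Priority-5 target at air mass 1.2, 4 hr from setting, 0.5 hr sequence.
print(target_score(5.0, 1.2, 4.0, 0.5))   # ≈ 5.2
```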

In addition to these parametric weightings, target scores are also adjusted based on timing criteria. The default logarithmic timing scheme steadily increases the score of a target from night to night until it has been observed. Alternate timing schemes allow for periodic (ephemeris‐based) or regular aperiodic ("best effort") monitoring of targets, or for target activation within a specified window of time only.

Finally, we have found it important to increase the score of targets once they have been observed on a given night, so that they are more likely to be observed to completion (one or more sets of the requested exposure sequence) during that night. This prevents fragmentation of observer programs and reduces overheads, which are mostly incurred on a per‐target basis.

3.4. Image Analysis Pipeline

The constituent routines for our image analysis pipeline are composed within the context of PyRAF, a Python wrapper for the IRAF data reduction environment of the NOAO. The pipeline is instantiated in a single Python script that can be run from the Linux command line. The script runs continuously throughout the night, identifying new raw images as they are copied into the target directory and processing them in real time.

PyRAF allows access to IRAF routines from within Python, a scriptable, object‐oriented, high‐level language environment. In particular, Python performs active memory management and, with its various included modules, supports mathematical and logical operations on array variables, regular‐expression matching against text strings, and easy access to FITS headers and data.

Python scripts that access arbitrary PyRAF routines can be executed from the command line. These scripts are not as fast as compiled C routines; however, the single most substantial overhead for script execution is incurred at start‐up, as the PyRAF libraries (including IRAF) are loaded into memory. Once these are cached in memory, the execution speed of our scripts is competitive with native IRAF and is adequate for our purposes.

The routines of the P60 pipeline execute the following reduction steps in sequence: (1) demosaicking, which performs overscan subtraction on the separate image extensions produced by the two amplifiers and combines them into a monolithic image while preserving the values of unique header keywords associated with each extension; (2) bias subtraction against our nightly bias image; (3) flat fielding against the dome‐flat images taken during the afternoon or previous morning, plus sky subtraction and the addition of the dead‐reckoning world coordinate system (WCS); (4) masking of bad pixels, using the nightly bad pixel mask; (5) object detection using a spawned SExtractor process; (6) WCS refinement via triangle‐matching against the USNO B‐1.0 catalog, using the ASCFIT software (Jørgensen et al. 2002); and (7) seeing and zero‐point estimation using USNO B‐1.0 catalog stars identified in the image.
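
A skeleton of the real‐time loop and step sequence. Each numbered reduction step appears as a hypothetical helper, and the directory path and polling interval are invented; the actual pipeline implements these steps with PyRAF/IRAF tasks and a spawned SExtractor process.

```python
import glob
import time

def demosaic(f):           pass  # (1) overscan-subtract both extensions, merge
def bias_subtract(f):      pass  # (2) subtract the nightly bias
def flatfield(f):          pass  # (3) dome-flat division, sky subtraction, dead-reckoning WCS
def mask_bad_pixels(f):    pass  # (4) apply the nightly bad-pixel mask
def detect_objects(f):     pass  # (5) spawn SExtractor
def refine_wcs(f):         pass  # (6) ASCFIT triangle match against USNO B-1.0
def estimate_zeropoint(f): pass  # (7) seeing and zero point from catalog stars

STEPS = (demosaic, bias_subtract, flatfield, mask_bad_pixels,
         detect_objects, refine_wcs, estimate_zeropoint)

processed = set()
while True:                                   # runs all night
    for frame in sorted(glob.glob("/data/raw/*.fits")):
        if frame not in processed:
            processed.add(frame)
            for step in STEPS:
                step(frame)                   # steps (1)-(7) in sequence
    time.sleep(5)                             # poll for newly copied raw images
```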

If an insufficient number of stars are identified during the WCS refinement process for an image, then the dead‐reckoning WCS is left untouched and the seeing and zero‐point estimation steps are skipped. Calibration products are produced from raw calibration bias and dome‐flat images at the start of the night as a separate process.

The final analysis task, which is performed by a special single‐purpose script, is to determine our best‐focus value and current seeing from a single focus run (multiple exposures and a single readout) on a bright star. For the sake of speed, this task omits most of the standard processing steps.
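
One plausible implementation of this step is a parabolic fit of measured stellar FWHM against secondary‐mirror position; the fitting method and the numbers below are our assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical focus run: five secondary positions and the FWHM measured
# for the bright star at each one.
focus_pos = np.array([20.0, 20.2, 20.4, 20.6, 20.8])   # encoder units
fwhm      = np.array([2.6, 1.9, 1.5, 1.7, 2.3])        # arcsec

a, b, c = np.polyfit(focus_pos, fwhm, 2)    # fwhm ≈ a*x**2 + b*x + c
best_focus = -b / (2.0 * a)                 # vertex of the parabola
best_seeing = np.polyval([a, b, c], best_focus)
print(best_focus, best_seeing)              # best-focus value, current seeing
```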

Additional routines have been coded but are not run in an automated fashion, either because of difficulty in robustly defining their operations or because of excessive processing requirements. These include fringe‐image creation and defringing of I‐ and z'‐band images, co‐addition of multiple dithered images to achieve greater depth, and mosaic co‐addition of multiple images, using SWarp, to cover areas significantly larger than the CCD field of view.

The P60 pipeline routines are general and can be readily applied to other data reduction tasks; indeed, we have already adapted them to the construction of an interactive pipeline for the Wide Field Infrared Camera (WIRC; Wilson et al. 2003) data reduction at the Hale 200 inch (5.08 m) telescope.

3.5. Data Archive

The P60 data archive is designed to securely store data collected at the robotic facility and to provide efficient and convenient access to users from the P60 partner institutions. In return for a 10% share of telescope time, the Infrared Processing and Analysis Center (IPAC) has assumed responsibility for the procurement, installation, and maintenance of the archive hardware, as well as for database software development, following specifications provided by the P60 science team at Caltech.

The archive routinely stores the entire set of raw frames, calibration data, and pipeline‐processed images collected nightly at the telescope. The data are transmitted from Palomar Mountain to the Caltech campus over the new HPWREN fast data link. The images are transmitted in a losslessly compressed form, and MD5 checksums are used to verify their integrity. At IPAC, all files are stored on a cluster of Sun Microsystems computers hosting the archive server and database structure. A RAID5 Nexsan ATAboy disk farm provides approximately 3 TB of disk space. A second copy of the data is kept on Caltech computers at Robinson Laboratory as a backup. Each nightly batch of data is ingested into the database software, which has an astronomy‐optimized architecture similar to other IRSA (NASA/IPAC Infrared Science Archive) archives. User access is provided through a Web‐based interface. Using the archive Web page, users can query the database, locate the data they require, and request them from the archive. Data delivery is from a staging area, following e‐mail notification to the user. Under normal operating conditions, small data packets can be obtained in this way within minutes.
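
A minimal sketch of the MD5 integrity check applied on ingest; the filename and expected digest are placeholders.

```python
import hashlib

def md5sum(path, blocksize=1 << 20):
    """Compute the MD5 digest of a file, reading it in 1 MB blocks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(blocksize), b""):
            digest.update(block)
    return digest.hexdigest()

# The expected digest would come from a manifest generated at the telescope.
expected = "0123456789abcdef0123456789abcdef"   # placeholder value
# if md5sum("p60_frame.fits.gz") != expected:
#     raise IOError("checksum mismatch: file corrupted in transit")
```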

4. AUTOMATED SYSTEM PERFORMANCE

The P60 has been running in a fully automated mode since 2004 September. This includes all aspects of operation, from the automated queue scheduler through nightly ingestion of archival data. Here we present an overview of the current system performance, focusing primarily on information relevant for interested P60 observers.

4.1. CCD Camera, Telescope, and Filters

As of 2006 June, the camera was performing reliably and had met all relevant specifications. Since the fall of 2004, the amount of time lost due to detector or electronics problems (or related software) is small (<5%). A summary of the relevant camera details can be found in Tables 1 and 2.

The most relevant characteristic for our science goals is the readout time. The full‐frame readout time of the system is 24 s. This can be significantly reduced, however, by using the region‐of‐interest mode (§ 3.1.1). For instance, a 6' × 6' field (1/4 of the chip) requires only 10 s to read out.
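
The two quoted readout times are consistent with a simple model of a fixed overhead plus a cost proportional to the number of pixels read. The split below (about 5.3 s of overhead and 18.7 s of pixel time) is our inference from those two numbers, not a measured quantity.

```python
# Solve t = overhead + fraction * pixel_time using the two quoted points:
# 24 s at fraction 1.0 (full frame) and 10 s at fraction 0.25 (6' x 6' ROI).
pixel_time = (24.0 - 10.0) / (1.0 - 0.25)   # ≈ 18.7 s for the full chip
overhead = 24.0 - pixel_time                 # ≈ 5.3 s fixed cost

def readout_time(roi_fraction):
    """Estimated readout time for a region of interest covering the
    given fraction of the chip area."""
    return overhead + roi_fraction * pixel_time

print(readout_time(1.0))    # 24.0 s, full frame
print(readout_time(0.25))   # 10.0 s, quarter-chip ROI
```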

We have found that amplifier 1 (the "bottom" amplifier) has a significantly lower read noise than amplifier 2 (the "top" amplifier; 5.3 vs. 7.8 e−). The top region of the CCD is also cosmetically less pleasing than the bottom region, as several adjacent bright columns run through the center portion of the CCD (see Fig. 6). We therefore recommend applying a small offset from the central location (+3' R.A., −3' decl.) for nonextended sources. We have added an optional offset parameter to our target specification protocol in order to make this change easier for users.

The pointing accuracy of the system is more than sufficient for our needs, with typical rms values of 15''. However, we have found somewhat deviant behavior (up to 45'' offsets) for targets observed at large air mass (>3). We believe this is caused by different pointing behavior with the eyepiece mounted (used for rapid manual calculation of the pointing model) than with the CCD camera mounted (nightly observations). We are currently investigating this issue in more depth. However, we note that given our large field of view, even pointing errors as large as 1' are unlikely to cause significant problems.

Our typical filter wheel configuration consists of a set of standard broadband filters: Johnson UBV (Bessell 1990 and references therein), Kron RI (functionally similar to Cousins R_C I_C; Bessell 1990), Sloan i' z' (Fukugita et al. 1996), and Gunn g (Thuan & Gunn 1976); two variations on Sloan z': zshort and zlong; and two narrowband Hα filters (λc/Δλ = 6564/100 and 6584.65/17.5). We have found significant deviations from the canonical transmission curves for some of our broadband filters. We therefore measured the transmission curves of all of our broadband filters; the results are shown in Figure 7. These measurements are also available in tabular form online.

4.2. Observatory Conditions

Observing conditions at Palomar are highly seasonally dependent. In the summer months, it is rare to lose an entire night to weather, and the average seeing at the P60 is ∼1.1″ in the R band. The winter months are much worse: as an extreme example, the P60 was closed for 15 full nights in 2005 January. Average winter seeing degrades to ∼1.6″ and can at times be significantly worse. The seeing we experience at the P60 is often slightly worse (by ∼0.2″) than the values reported at the Hale 200 inch telescope. We attribute this primarily to the difficulty we have encountered in determining and maintaining an accurate focus value (see § 4.3).

Sky background levels are generally good at Palomar, although they have increased somewhat over the last decade as the area has become more populated. In recent images at P60 with the new CCD, we have found sky background levels of 19.9, 19.0, 18.8, and 17.7 mag per point‐spread function (PSF; here approximated as a circular aperture of 1.5″ diameter) in B, V, R, and I, respectively. The 3 σ limiting magnitudes of our current system are 20.5 mag in B, V, and R, and 19.8 mag in the I band, for an isolated point source in a 1 minute exposure. These results are summarized in Table 3.

The shortest recommended exposure time is set by the shutter mechanism. For exposures shorter than 2 s, the shutter speed becomes important, and the true opening time (measured from a flat‐field linearity curve) is not strictly repeatable. The longest recommended exposure is limited by the fact that we do not use a guider to assist telescope tracking; this limit is therefore dependent on external conditions. In standard seeing of 1.5″, exposures longer than 180 s begin to show image degradation; under good seeing conditions of 1.0″, we have noticed degradation in exposures longer than 90 s. Users requiring deep images of a field will need to split their observations into exposures of this length, thereby incurring additional readout overhead (see the sketch following this paragraph).
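
A minimal sketch of that bookkeeping, using the exposure ceilings quoted above; the simple two‐step seeing threshold is our simplification of the seeing dependence.

```python
import math

def split_observation(total_exposure_s, seeing_arcsec, readout_s=24.0):
    """Number of unguided sub-exposures and the resulting readout
    overhead for a requested total integration time."""
    max_exp = 90.0 if seeing_arcsec <= 1.0 else 180.0   # ceilings from the text
    n_frames = math.ceil(total_exposure_s / max_exp)
    return n_frames, n_frames * readout_s

# A 30-minute integration in 1.5" seeing: 10 frames, 240 s spent reading out.
print(split_observation(1800.0, 1.5))
```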

4.3. Observatory Efficiency

The P60 currently devotes on average ≈50% of the time the dome is open for observations to science exposures. This value is quite variable, however, depending primarily on the number of different fields observed each night. An overview of the typical nightly efficiency is presented in Table 4. Note that the values presented are given in terms of the total time the dome is open, not the total available dark time; additional factors such as weather can significantly affect the overall efficiency.

Besides required operations such as telescope slews, the primary constraint on our system efficiency comes from focusing. We have found the secondary mirror on the telescope to be unstable, particularly at higher elevations. Large telescope slews unpredictably alter the secondary mirror position, thereby taking the telescope out of focus. While engineering work to reinforce the structural support of the secondary in the spring of 2006 has improved stability, we still conduct a focus loop every time we slew to a new target to maintain focus (this loop is disabled for rapid‐response observations). As each individual focus loop takes ≈3 minutes, visiting a large number of fields each night can have a significant impact on our system efficiency.

In addition, our relative efficiency is lowered by ≲5%, because the P60 is not equipped with a guider. As mentioned in § 4.2, this puts an upper limit on suggested exposure times. In many cases, we must use shorter exposures than would otherwise be optimal, in order to minimize the fraction of time spent in CCD readout. We note, however, that real‐time scheduling has no noticeable impact on efficiency, as the OSS spends less than 1% of the available time each night calculating which target to observe next.

4.4. Transient Response Time

The telescope response time to transient notices currently varies from 2 to 6 minutes. Our fastest response time was for GRB 050906, for which we began observations 101 s after receiving the trigger notice (114 s after the GRB; Fox et al. 2005a). Under the current system, observations of transient events do not begin until the previous observation has successfully completed. Although most exposures are relatively short, this could take up to 5 minutes and explains why we have not met our stated response time goal in all cases. We are currently in the process of implementing an instantaneous interrupt capability, and aim to improve the response time to <3 minutes by the end of summer 2006.

5. CONCLUSIONS

In this paper, we have presented our efforts to automate and roboticize the Palomar 60 inch telescope. As of 2004 September, all components of the system operate in a fully automated fashion, making P60 one of the few robotic, medium‐aperture facilities in the world. The P60 has been routinely responding to Swift GRB alerts over the last year and a half, and will continue to do so over the lifetime of the Swift mission. The system is well positioned for the plethora of optical transients that will be discovered in the upcoming years.

In addition to the current optical camera, we are planning several major upgrades to further improve the scientific capabilities of the system. In the near term, our top priority is to add a near‐infrared (NIR) camera to the P60. We have already acquired the NIR detector from the out‐of‐use Cerro Tololo Infrared Imager (CIRIM) and have upgraded the controller electronics. We are currently working on both the optical design and software development, with the hope of having both cameras mounted and functional within the next year. We also plan to make the P60 fully compliant with the Virtual Observatory Event Network (VOEventNet) protocol, so that the system can communicate with other observatories around the world without any human intervention.

As longer term projects, we are exploring the possibility of adding either a polarimeter or a multiband camera to the facility. Regardless of the details, we are committed to making the P60 a scientifically productive facility in the years to come.

We would like to thank the entire staff at Palomar Observatory, without whose patience and hard work this project would not have been possible. S. B. C. and A. M. S. are supported by the NASA Graduate Student Research Program. A. G. acknowledges support by NASA through Hubble Fellowship grant HST‐HF‐01158.01, awarded by STScI. GRB research at Caltech is supported through NASA and the NSF.
