OBTAINING WEATHER DATA FOR INPUT TO CROP DISEASE-WARNING SYSTEMS: LEAF WETNESS DURATION AS A CASE STUDY

Disease-warning systems are decision support tools designed to help growers determine when to apply control measures to suppress crop diseases. Weather data are nearly ubiquitous inputs to warning systems. This contribution reviews ways in which weather data are gathered for use as inputs to disease-warning systems, and the associated logistical challenges. Grower-operated weather monitoring is contrasted with obtaining data from networks of weather stations, and the advantages and disadvantages of measuring vs. estimating weather data are discussed. Special emphasis is given to leaf wetness duration (LWD), not only because LWD data are inputs to many disease-warning systems but also because accurate data are uniquely challenging to obtain. It is concluded that there is no single “best” method to acquire weather data for use in disease-warning systems; instead, local, regional, and national circumstances are likely to influence which strategy is most successful.


INTRODUCTION
Disease-warning systems, also known as disease forecasters, are decision support tools that help growers to assess the risk of outbreaks of economically damaging crop diseases. Using information about weather, crop, and/or pathogen, warning systems advise growers or other crop managers when they need to take an action - usually to apply a fungicide or bactericide spray - to prevent disease outbreaks and avoid economic losses.
Disease-warning systems are key elements of Integrated Pest Management (IPM) efforts to reduce excessive use of chemical pesticides. There are several potential incentives for growers to adopt warning systems. By substituting risk-assessment-based spray timing for traditional calendar-based pesticide spraying, growers can reduce spray frequency, limiting the health and environmental hazards of pesticide use while presenting an environmentally friendly image to customers. In some instances, implementation of disease-warning systems may also enhance profitability by reducing input costs (Funt et al., 1990; Gleason et al., 1994).
In the past 40 years, scores of disease-warning systems have been developed and validated for dozens of crops. A few are in wide use by growers, and represent encouraging success stories in IPM (Gleason et al., 1995). Most warning systems, however, have not made the transition from scientific validation to real-world application (Magarey et al., 2002).
Why are so many of these tools languishing in the toolbox? This implementation shortfall is not limited to disease-warning systems, but also characterizes most other types of agricultural decision support systems (DSSs), whose rate of grower adoption is widely perceived as disappointing (Matthews et al., 2008; McCown et al., 2002). One reason for this lack of implementation is failure of system developers to adequately involve growers and other end users in development and testing of the systems (McCown, 2002). The result of this disconnection between system developers and system users can be failure of the systems to meet growers' needs.
One way to make DSSs more user-friendly is to streamline the logistics of using them. Campbell & Madden (1990) describe several logistical barriers to grower adoption of disease-warning systems - inconvenience, added cost and labor, and difficulty in responding to advisories in a timely way - that are often encountered in the process of obtaining and handling weather data inputs. Grower adoption of warning systems is unlikely to increase unless these barriers come down.
This article reviews ways by which weather data are obtained for input to warning systems, touching on logistical advantages and drawbacks of each approach. Leaf wetness duration (LWD), defined as the period of time that free water is present on plant surfaces, is highlighted because it illustrates many of these points, and because of its importance in governing the activity of many foliar pathogens.

Measurements, estimates, and errors
Weather parameters that are commonly used as inputs to disease-warning systems include air temperature, rainfall, relative humidity (RH), and LWD. Additional variables, such as wind speed, wind direction, and solar radiation, are inputs to only a few warning systems.
Measurements made at weather stations are the foundation for all types of weather inputs to warning systems. These data are sometimes measured on individual farms, either within or near a crop canopy of interest. Alternatively, data measured at regional networks of weather stations may be localized to a particular farm using Global Positioning Systems (GPS), Geographic Information Systems (GIS) methods, and meteorological models (Royer et al., 1989; Magarey et al., 2001; Thomas et al., 2002).

Do-it-yourself weather monitoring
Some growers or farm managers obtain warning-system inputs from weather stations on their own farms. In some cases, these units automatically process the weather measurements through a warning system algorithm to output spray advisories, thereby saving growers the chore of making the calculations themselves. Do-it-yourself weather monitoring can be risky, however, in part because individuals operating the weather stations often lack training in making meteorological measurements.
In the authors' experience, key mistakes are often made when installing on-farm weather monitoring equipment: tipping-bucket rain gauges are not leveled adequately, temperature sensors are exposed to direct sunlight, and LWD sensors are not positioned at appropriate angles or canopy locations (Lau et al., 2000). Such errors can severely compromise the accuracy of weather data inputs to warning systems.
Once data-gathering begins, RH sensors may drift out of calibration, spider webs may immobilize a rain gauge tipping bucket, bird excrement may pile up on LWD sensors, and data-loggers may exhaust their batteries at crucial times. More spectacular misadventures are not uncommon: lightning strikes cause station meltdowns, mowers shred wires, and cultivator tines uproot data-loggers.
Many of these problems are inherent in environmental monitoring, whether at individual do-it-yourself stations or in networks of stations, but can corrupt key weather data if they are not anticipated, prevented, and/or promptly solved. In the authors' experience, owner-operators of on-farm weather stations often lack time to manage weather monitoring effectively. Preoccupied with a myriad of other management duties, they may overlook misbehaving weather sensors. Even technologically sophisticated growers may lack sufficient expertise to notice weather equipment failures and diagnose their causes. If a warning system fails to function reliably, end users may mistakenly blame the warning system itself, even when erroneous weather-data inputs may actually be at fault. But growers who have absorbed significant crop losses as a result of apparent warning system failure - for whatever cause - are wary of trying to use the system again. In general, growers stand to lose far more money from a disease outbreak, due to reduced yield or degraded quality of a crop, than they might gain from saving a few pesticide sprays by using a warning system (Turechek & Wilcox, 2005).
Although weather-monitoring equipment has become cheaper and simpler to use, it needs to be operated in such a way that its data are reliable enough for use in warning systems. Sutton et al. (1984) cautioned that those responsible for weather monitoring "would do well to consult with agricultural meteorologists to obtain a basic appreciation of the functioning, calibration, protection, and limitations of instruments, as well as the requirements for maintaining sensors and data-loggers in the field." Unfortunately, this advice is heard too rarely and heeded even less often.
To verify that weather stations and their sensors are working properly, regularly scheduled maintenance and error checking are essential. As a practical matter, this should be done by trained technicians. For a fee, growers can sometimes send their equipment to the vendor or manufacturer for repair and recalibration after each growing season, but in our experience this is seldom done. Furthermore, once-per-year maintenance may not be frequent enough to prevent critical measurement errors from occurring. Although the need for regular maintenance of weather-monitoring equipment is often not emphasized strongly by vendors or IPM educators, it is vital to reliable long-term performance of disease-warning systems.
The proliferation of low-cost, on-farm weather stations presents both opportunities and challenges for researchers, IPM educators, and farm advisors. Many more growers can now afford to purchase weather stations, and can therefore utilize weather-based disease-warning systems. A resulting challenge is to help growers learn how to install stations properly and maintain them effectively over sustained periods. In some cases at least, sustainable operation of weather-based warning systems may require reliance on alternative sources of weather data.

Networks of weather stations
Regional networks of automated stations offer an alternative strategy to deliver weather data for warning systems. Typically, measurements are made at multiple weather stations arrayed across a region, downloaded and processed centrally, and the data and/or warning system advisories are made available to subscribers.
In North America and also in Brazil, operational modes of agriculture-focused weather station networks are diverse. Some are supported primarily by public funds, but may be supplemented by user fees. In North America, private-sector networks may either be vertically integrated (weather stations owned and operated by the same corporations that process data and provide advisories) or more specialized (companies develop and provide advisories for subscribers based on data from public-sector weather stations). In Brazil, agriculture-focused weather station networks are supported mainly by public funds. Privately operated networks of stations are in development in Brazil, but companies that provide weather-based advisories to subscribers currently rely on data from public-sector weather stations.
Historical examples - An early example of a warning system that relied on a network of weather stations was the TOM-CAST warning system for tomatoes, which was deployed in southern Ontario in the late 1980s (Pitblado, 1988). Financing for this network came from the provincial government and a grower organization. Each of the 13 stations deployed across three counties was assumed to provide relevant data for processing-tomato growers up to 16 km away from the station in nearly flat terrain. Once per week during the first month of the growing season, and three times per week after fungicide sprays began, scouts drove to each station and downloaded the data-logger to a storage module. Data were then processed at a central location, and TOM-CAST advisories were made available to growers as recorded messages on toll-free phone numbers (Gleason et al., 1995). By the mid-1990s, data were downloaded remotely by telephone links with each station, greatly reducing the network's labor requirements. Currently, data from weather stations are transmitted to a central processing site by radiotelemetry (R. Pitblado, Ridgetown College, Ridgetown, Ontario, Canada, personal communication).
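TOM-CAST works by converting each day's leaf wetness duration and mean temperature during the wet period into a disease severity value (DSV), and advising a spray once the running total crosses an action threshold. The sketch below is a minimal Python illustration of that structure only; the temperature bands, wet-hour breakpoints, and threshold are made-up placeholders, not the published TOM-CAST table.

```python
# Sketch of a TOM-CAST-style disease severity value (DSV) accumulator.
# The temperature bands, wet-hour breakpoints, and action threshold
# below are illustrative placeholders, NOT the published TOM-CAST table.

def daily_dsv(mean_wet_temp_c, wet_hours):
    """Return a 0-4 severity value for one day from the mean air
    temperature during the wet period (deg C) and hours of leaf wetness."""
    bands = [  # (temp_low, temp_high, [(max_wet_hours, dsv), ...])
        (13, 17, [(6, 0), (15, 1), (20, 2), (24, 3)]),
        (18, 20, [(3, 0), (8, 1), (15, 2), (22, 3), (24, 4)]),
        (21, 25, [(2, 0), (5, 1), (12, 2), (20, 3), (24, 4)]),
        (26, 29, [(3, 0), (8, 1), (15, 2), (22, 3), (24, 4)]),
    ]
    for lo, hi, steps in bands:
        if lo <= mean_wet_temp_c <= hi:
            for max_hours, dsv in steps:
                if wet_hours <= max_hours:
                    return dsv
    return 0  # temperatures outside the favorable range score zero

def spray_advised(daily_records, threshold=18):
    """Accumulate DSVs over (mean_wet_temp_c, wet_hours) records in
    date order; return True once the action threshold is reached."""
    total = 0
    for temp, wet in daily_records:
        total += daily_dsv(temp, wet)
        if total >= threshold:
            return True
    return False
```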
As computer, telecommunication, and internet technologies flowered, new opportunities arose for integrating them into weather networks. One of the first private-sector firms to offer site-specific weather estimates, as well as disease- and pest-warning advisories, was SkyBit Inc. (Russo, 1999). SkyBit collects data from hundreds of publicly owned and operated automated weather stations over much of North America, whose data are available for public use at little or no charge, on a near-real-time basis. For a monthly fee, SkyBit subscribers receive value-added products based partly on these "free" data, including site-specific estimates of weather conditions and pest risks on their farms, developed with GPS and GIS software in conjunction with warning system algorithms. These estimates are localized to the vicinity of each farm: the GPS software pinpoints the location of each farm or field, and the GIS programs interpolate weather data, and then calculate disease risk, for farms located between stations by adjusting the weather station measurements with information about surface topography, including elevation and aspect. The SkyBit system provides advisories in near-real time, as well as forecasts up to 3 days in advance, at a spatial resolution of about 1 km².
An array of >2,000 automated weather stations in the western U.S. (California, Oregon, Washington, Idaho, and Arizona), owned by a consortium of farm-management companies, is the largest private-sector network ever developed for agricultural advisory systems. These stations are spaced 5 to 10 km apart in coastal hills and valleys, and 20 km apart in flatter terrain (Thomas et al., 2002). Data are collected every 15 min by radio telemetry and then sent by phone modems to central sites for processing. Products that include color-coded regional risk maps for target pests, diseases, and other agricultural risks are distributed via the Internet. Software processing of weather data is conceptually similar to that of SkyBit. Warning system advisories are issued for 12 diseases and six insect pests of grape, melon, tomato, pepper, potato, strawberry, hops, apple, pear, and lettuce.
Reliability of data - Accuracy of site-specific weather estimates varies with the parameter being estimated, among other factors. An assessment of SkyBit estimates for sites in the Midwest U.S. found that estimation error was less for temperature than for LWD (Gleason et al., 1997). Similarly, for the western U.S. site-specific estimation network, Thomas et al. (2002) noted that station-to-station data correlation was highest for temperature, intermediate for RH and dew point, and lowest for LWD. This is not surprising, because air temperature is more conserved spatially than LWD, which is highly variable because it responds to subtle differences in RH, wind speed, cloud cover, and plant canopy architecture (Sentelhas et al., 2005; Sutton, 1977; Pereira et al., 2002). Several efforts have been made to reduce LWD estimation errors in site-specific estimations (e.g., Kim et al., 2002; Sentelhas et al., 2006), in order to more confidently use estimated LWD data as warning-system inputs.
A related factor influencing the accuracy of network-based weather data estimates is the distance between stations (Gleason et al., 1995). Ideally, a network would utilize the fewest stations consistent with delivering acceptably accurate weather estimates to its clients, in order to provide as cost-effective a service as possible. Among the factors that influence site-location decisions for weather stations are the spatial distribution of clients' farms, spatial patterns of variation in weather conditions (Thomas et al., 2002), and the political and economic influence of individual growers.
For operation of disease-warning systems, networks can offer both advantages and disadvantages in comparison to do-it-yourself weather monitoring. For growers, the advantages are readily apparent: the burdens and frustrations of monitoring are lifted, and the convenience of obtaining warning-system advisories - by fax, email, or the web - is dramatically enhanced. These are potent factors in determining whether a warning system is likely to become part of a grower's long-term pest management strategy.
Accuracy of on-site weather monitoring, and therefore its reliability in warning systems, is often assumed to exceed that of site-specific estimation. To many growers and other clients, the physical presence of weather-monitoring hardware in their fields is more reassuring than data transmissions from an invisible (to them) network of stations (Magarey et al., 2001). A U.S. vendor of weather measurement devices has popularized the appealing slogan, "to measure is to know." But to know what, exactly? As discussed above, do-it-yourself weather monitoring is fraught with pitfalls that can compromise the accuracy of measurements. No weather-measurement device is sufficiently foolproof to obviate the need for proper installation and maintenance, but these quality-control factors often receive little consideration from do-it-yourself users.
Networks of weather stations offer potentially compelling advantages over do-it-yourself stations in quality control of weather measurements. Trained staff can be assigned to set up stations, maintain them on a regular schedule, and error-check data to flag potential problem sensors or stations. When stations or sensors abruptly fail, interpolation methods such as kriging (Cressie, 1993) can fill the resulting spatial gaps in site-specific estimation by temporarily using data from the nearest stations in the network. Furthermore, site-specific estimates can be adjusted locally by comparison with station measurements. By this process, sometimes referred to as "nudging" (J. Russo, SkyBit, Inc., personal communication), networks can be trained to learn local patterns of weather variation, and thereby iteratively improve the accuracy of local site-specific estimates as time passes (Gillespie et al., 1993). As computing power continues to grow and become cheaper, these capabilities are likely to increase. Seen from this perspective, weather station networks appear to offer the best hope for providing durable sources of reliable weather data for warning systems (Magarey et al., 2005).
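As a concrete illustration of gap-filling, the sketch below estimates a failed station's reading from its neighbors with inverse-distance weighting, a simpler stand-in for the kriging approach cited above; the station coordinates and values are hypothetical.

```python
import math

def idw_estimate(target_xy, neighbors, power=2.0):
    """Inverse-distance-weighted estimate at target_xy from (x, y, value)
    tuples for nearby stations; a simple stand-in for kriging."""
    num = den = 0.0
    for x, y, value in neighbors:
        d = math.hypot(x - target_xy[0], y - target_xy[1])
        if d == 0.0:
            return value  # target coincides with a working station
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical example: three working stations (coordinates in km,
# temperature in deg C) fill in for a failed station at (12, 8).
stations = [(0.0, 0.0, 18.4), (25.0, 5.0, 17.1), (10.0, 22.0, 16.5)]
print(round(idw_estimate((12.0, 8.0), stations), 1))
```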
Weather stations are only as valuable as their quality-control procedures, however. Some weather station networks in North America and Brazil have been dogged by unacceptably high levels of station and sensor failure. Often, these shortcomings are traceable to budget problems. In some cases, networks are built around relatively low-cost monitoring hardware that turns into a financial black hole of maintenance and replacement costs after being deployed. When maintenance budgets shrink, technician training may weaken, maintenance visits to stations become fewer, repair and replacement schedules stretch out longer, data-checking becomes more cursory, and weather data quality drops. As these conditions persist, the risk rises that clients will lose faith in network-based weather systems. The "garbage-in-garbage-out" warning is as meaningful for weather station networks as for do-it-yourself monitoring.
The future of site-specific weather data estimation - Additional technologies appear likely to enhance the accuracy of site-specific weather estimates in the near future. For example, ground-based radar, currently used primarily in aviation, can also locate areas of rainfall in near-real time. Local areas of rainfall, caused by small-scale convective systems, are among the most elusive phenomena for weather station networks, since these systems can drop substantial rainfall on swaths that are only a few kilometers wide, and thus may be invisible to stations situated on a broader-scale grid. By timing the return rate of radar reflections from precipitation, it is possible to track the passage of rainfall across a landscape. Although rainfall amount is more difficult to determine than its timing of occurrence, due to radar attenuation errors, knowledge of rainfall occurrence can help to pinpoint leaf wetness duration, which is a key input to disease-warning systems. Preliminary attempts have been made to use radar estimates of rainfall as inputs to disease-warning systems (Rowlandson, 2006).
Satellite-based scans of the earth's surface may eventually enhance site-specific weather estimation. As the spatial resolution of these scans increases, they may be able to fill gaps in cloud-cover or albedo data that currently hamper estimation of net radiation, and thus the application of energy balance models to estimating LWD associated with dew periods.

The challenge of timeliness
The need for timeliness cuts across all methods of obtaining weather data inputs for warning systems. A key question is whether growers have sufficient time to respond effectively to spray advisories once they are issued. A fungicide-spray warning is of little use to a grower who cannot apply the needed spray in a timely manner due to rainfall, impassably muddy terrain, excessive wind, or re-entry interval restrictions on fungicides. On some farms, applying a single spray to the entire crop may require several days to a week due to the large size of the planting, speed and number of sprayers, labor availability, or other factors.
Another constraint on timeliness is the mode of action of the pesticides used. Many of the most affordable fungicides and bactericides have a contact mode of action, meaning that they are effective against target microorganisms only on the outer surfaces of plants. Once a fungus or bacterium has invaded a plant, contact fungicides cannot stop the infection process. Pesticides with so-called penetrant (also known as systemic) modes of action can stop infections that have already started, but the time window for this eradicative activity is limited to a few hours or days.
These logistical constraints on timeliness mean that warning systems will be accepted only when growers have adequate time to respond to advisories. When growers operate their own weather stations, they may overshoot action thresholds if, as sometimes happens, they do not retrieve data sufficiently often. Although networks of stations may process data more frequently, a lag of 8 to 12 hours between weather measurements and issuance of advisories is not uncommon due to time requirements of data retrieval and processing from multiple stations.
These time constraints have prompted increased efforts to use forecasted weather data, from one to several days into the future, as warning system inputs (Johnson et al., 2004; Kim et al., 2006; Thomas et al., 2002; Magarey et al., 2001, 2002). Although forecasting is gradually becoming more accurate as weather-monitoring technology improves, predicting the future entails an additional error factor beyond those inherent in making weather measurements and model-based estimates. The further into the future a prediction is made, the less accurate it can be expected to be. Consequently, using forecasts to operate disease-warning systems necessitates a trade-off between timeliness and accuracy.

The special case of leaf wetness duration
Leaf wetness duration is the period of time during which free water - from dew, rainfall, fog, or irrigation - is present on the aerial surfaces of crop plants. LWD is a non-standard meteorological parameter because it is a property of surfaces as well as the atmosphere. There is no widely accepted standard for calibrating LWD sensors, and the vast majority of automated weather stations worldwide do not include these sensors. Nevertheless, LWD is an important determinant of the development of many foliar and fruit diseases through its influence on such key processes as pathogen germination, infection, and sporulation (Yarwood, 1978; Huber & Gillespie, 1992). As a result, LWD is an input to numerous disease-warning systems. The subject of measuring and estimating surface wetness on plants was reviewed recently by Magarey et al. (2005).
Leaf wetness duration is the most spatially heterogeneous weather input to warning systems. It varies not only with weather conditions but also with the type of crop, its developmental stage, and the position, angle, and geometry of individual leaves (Sutton et al., 1984). During dew periods, different micro-sites on a single leaf can vary in LWD by several hours per day. This immense heterogeneity poses a formidable challenge for measuring or estimating LWD. If using LWD sensors, where should they be placed? If using estimates, what part of the canopy should be used as the reference point?
Measuring LWD - Many types of sensors have been employed to measure LWD. Among the first were mechanical sensors utilizing strings that were maintained under slight tension during exposure (Sutton et al., 1984). As the string became wetter or drier it would contract or relax, resulting in movement of an attached ink-pen marker on a revolving paper chart.
By the mid-1980s, mechanical LWD sensors and strip charts were superseded by electronic sensors and automated data-loggers. Most electronic sensors were based on a design developed by Davis & Hughes (1970), which sensed the presence of wetness as a drop in electrical resistance across two adjacent circuits etched onto a printed-circuit board. Later workers found that coating these sensors with latex paint enhanced their sensitivity to wetness (Gillespie & Kidd, 1978; Sentelhas et al., 2004a) (Figure 1), that height and angle of deployment influenced sensitivity (Figure 2), and that electronic sensors could also be fabricated as cylinders rather than flat plates (Gillespie & Kidd, 1978; Gillespie & Duan, 1987; Lau et al., 2000; Sentelhas et al., 2004b; Sentelhas et al., 2007a). A recent innovation in flat-plate sensor design operates on the principle of electrical capacitance rather than resistance (Decagon Devices, Inc., Pullman, WA, U.S.A.).
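In operation, a logger reduces the raw electrical readings from such a sensor to hours of wetness. The minimal sketch below shows that reduction for a resistance-type flat plate, assuming hourly logging; the wet/dry resistance threshold is a hypothetical calibration value, since real sensors (especially painted ones) must be calibrated individually.

```python
def hours_wet(resistances_kohm, wet_threshold_kohm=150.0, interval_h=1.0):
    """Classify each logged reading from a resistance-type wetness sensor
    as wet (resistance below threshold, since water bridges the circuit)
    or dry, and sum the wet intervals into hours of LWD. The threshold
    shown is a hypothetical calibration value, not a manufacturer constant."""
    wet_readings = sum(1 for r in resistances_kohm if r < wet_threshold_kohm)
    return wet_readings * interval_h
```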

Sources of measurement error -
Since there is no consensus on how or where to deploy LWD sensors for use in warning systems, users position them in a variety of ways, even within the same crop canopy. This uncertainty can add to the potential for error in do-it-yourself weather monitoring. Magarey et al. (2005) suggest that, if a single LWD sensor is used to estimate LWD in a crop canopy, the optimal position is just below the top of the canopy, but fully exposed to the night sky. The rationale for this recommendation is that this position can be expected to provide "worst case" or maximum LWD readings (Jacobs et al., 1990; Potratz et al., 1994), since dew duration is greatest at the top of the canopy, and placement just below the top of the canopy slows morning drying by providing some protection from wind and sunlight.

Heterogeneity of LWD within a crop canopy can vary with the type of canopy and with climate. Sentelhas et al. (2005) found statistically significant differences in LWD between the top and bottom of apple canopies in Iowa, U.S., and of maize canopies in Ontario, Canada. In a coffee plantation in São Paulo State, Brazil, however, there were no differences in LWD among canopy positions. A reason for this contrast with the apple and maize canopies is that coffee plants have a conical shape, which exposes leaves to the sky at all canopy heights, so that dew periods began almost simultaneously at all leaf positions. A trend toward greater LWD at the bottom than the top of the coffee canopy - a difference of about 1.5 h day⁻¹ - was explained primarily as the result of differences in dry-off: during the morning, the top of the coffee plants received more total solar radiation and stronger wind than the other positions, resulting in more rapid dry-off. For grapes in São Paulo State that were cultivated in a north-south hedgerow system, LWD did not differ between the top and the inside of the canopy (Sentelhas et al., 2005). The well-pruned, hedgerow-like canopy structure probably allowed all sensors to cool at approximately the same rate during the night, and therefore to form dew at almost the same time. During the morning, there was sufficient space between rows that all the sensors received about the same exposure to sunshine and wind. These results emphasize that canopy architecture is an important factor to consider when deciding where to place LWD sensors, and how many sensors to deploy.
Within-canopy LWD heterogeneity can impact performance of a disease-warning system in some situations. Batzer et al. (2008) found substantial LWD variability within the canopy of mature, semi-dwarf apple trees (4 to 5 m tall) in Iowa (Figure 3). When they simulated performance of a warning system for sooty blotch and flyspeck (SBFS) disease that was based on cumulative hours of LWD, the date on which the action threshold was reached varied by as much as one month depending on where a LWD sensor was located within the canopy. Timing errors of this magnitude could be expected to lead to failures of the warning system. Even at different lateral positions at the base of the apple canopy, cumulative LWD varied by as much as 44%. These micro-site differences were much less pronounced during rainfall-associated wet periods than during dew periods, because rainfall tends to minimize such differences. The findings suggested that canopy heterogeneity in LWD is likely to be accentuated in continental climates where LWD is caused predominantly by dew rather than rain.

A field experiment by Duttweiler et al. (2008) emphasized this potential problem in using LWD measurements in large crop canopies to operate a warning system. Duttweiler monitored LWD from sensors located at 1.5-m height near the base of the canopy of 4- to 5-m-tall apple trees during 19 site-years in orchards in Iowa and Wisconsin in the Upper Midwest U.S., and in North Carolina in the Southeast U.S. For the Upper Midwest sites, using these LWD data as inputs to the SBFS warning system resulted in substantial timing errors in determining the action threshold - both false-positive errors (threshold reached too early) and false-negative errors (threshold reached too late) (Figure 4). Substituting RH data measured at the same within-canopy position, she found that cumulative hours of RH > 97% reduced both types of errors in comparison to LWD. No such predictive advantage of cumulative RH hours was evident in data from North Carolina, however. Because RH is more spatially conserved than LWD, it may be preferable to LWD for use in warning systems that rely on measurements in large-scale crop canopies. However, this potential advantage of RH may apply primarily in climates where LWD is caused mainly by dew, rather than in climates like that of western North Carolina, where most LWD is associated with rainfall. Thus, crop canopy and climate can interact in influencing the choice of moisture-related parameter for input to a warning system.
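The threshold-timing comparison in these studies reduces to a simple computation: walk an hourly record in time order and report when the cumulative total of wet (or RH > 97%) hours first reaches the action threshold. A minimal sketch follows; the threshold value and data layout are hypothetical.

```python
def threshold_date(hourly, threshold_hours=250.0):
    """hourly: (timestamp, flag) pairs in time order, where flag marks a
    wet hour (or, equivalently, an hour with RH > 97%). Returns the
    timestamp at which cumulative flagged hours first reach the action
    threshold, or None if it is never reached. The 250-hour threshold
    is a hypothetical value, not the published SBFS threshold."""
    total = 0.0
    for timestamp, flag in hourly:
        if flag:
            total += 1.0
            if total >= threshold_hours:
                return timestamp
    return None
```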
One way to reduce the impact of spatial heterogeneity on in-canopy LWD measurements is to deploy more sensors. Based on measurements in a grape canopy in New York State, Magarey (1999) calculated the number of electronic LWD sensors needed to provide 95% certainty that daily LWD estimation errors were within an acceptable range. Not surprisingly, deploying more sensors resulted in smaller errors. In practice, the number of sensors deployed is constrained by cost, and one LWD sensor per crop canopy is often the norm.
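Magarey's calculation is not reproduced here, but the principle can be illustrated with the standard sample-size formula for estimating a mean to within error \(E\) at 95% confidence:

\[ n = \left( \frac{1.96\,\sigma}{E} \right)^{2} \]

With hypothetical numbers, if sensor-to-sensor variability is \(\sigma = 1.5\) h day⁻¹ and errors within \(E = 1\) h day⁻¹ are acceptable, then \(n = (1.96 \times 1.5 / 1)^{2} \approx 8.6\), so nine sensors would be needed.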
Locating LWD sensors outside a crop canopy can circumvent some of the mechanical risks associated with in-canopy measurements, such as damage from mowers, sprayers, and cultivators. Furthermore, the potential for measurement artifacts associated with canopy heterogeneity is eliminated. Sensors and data-loggers may also be easier to access and service if they are located outside pesticide-spray zones and the associated restrictions of pesticide re-entry interval regulations. An additional practical consideration is that, for annual crops, data-loggers and sensors outside crop canopies may not need to be moved with each cropping season in concert with crop growth and annual rotation patterns, reducing costs and labor. Zhang & Gillespie (1990) showed that LWD measurements made at a nearby weather station could be calibrated to in-canopy LWD with acceptable accuracy. Sentelhas et al. (2004b, 2005) demonstrated that LWD measurements made at 30-cm height over turfgrass were quite similar to those measured at the top of five types of crop canopies - apple, coffee, maize, grape, and muskmelon - despite differences in canopy height, architecture, and climate. These findings suggest that data from nearby weather stations can be used as surrogates for in-canopy LWD measurements in disease-warning systems.

Modeling LWD - Faced with the frustrations of LWD measurement, agricultural meteorologists and plant pathologists have developed models to estimate LWD. These efforts, spanning more than 50 years, were reviewed by Huber & Gillespie (1992) and Magarey et al. (2005).
Models for estimating LWD are often classified into two general types, physical and empirical. Physical models utilize energy balance principles to simulate water deposition and dry-off on plant surfaces (Monteith, 1957;Monteith & Unsworth, 1990), based on air temperature, RH, wind speed, solar radiation, cloud cover, and sometimes other variables.
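For orientation (this is the standard formulation, not necessarily the exact form used in the cited studies), the latent heat flux for a fully wet surface, where canopy resistance is zero, reduces to

\[ \lambda E = \frac{\Delta (R_n - G) + \rho_a c_p (e_s - e_a)/r_a}{\Delta + \gamma} \]

where \(\Delta\) is the slope of the saturation vapor pressure curve, \(R_n\) net radiation, \(G\) the storage heat flux, \(\rho_a c_p\) the volumetric heat capacity of air, \(e_s - e_a\) the vapor pressure deficit, \(r_a\) the aerodynamic resistance, and \(\gamma\) the psychrometric constant. Under this convention, \(\lambda E < 0\) indicates condensation (dew deposition), and a surface is modeled as wet from the onset of condensation or rainfall until the accumulated water has evaporated.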
Among the advantages of physical models is that they can be highly accurate (±1 h day⁻¹) (Pedro Junior & Gillespie, 1982). In addition, because they are based on physical principles, they have the potential to be readily portable among climates and regions (Magarey et al., 2006). A serious limitation for practical application is that many physical models require certain input parameters, such as net radiation, cloud cover, or infrared radiation, that are not available at most weather stations. On the other hand, physical models have been applied successfully to the practical problem of obtaining in-canopy LWD data for use in warning systems. For example, Sentelhas et al. (2006) showed that a three-step process could be used to derive in-canopy LWD estimates from nearby weather stations that lacked LWD sensors (Figure 5). First, a Penman-Monteith energy balance model was used to derive LWD estimates at a weather station located over turfgrass. Next, the estimated LWD was correlated to measured LWD at the top of the canopies of several types of crops. Finally, a correction factor was applied. These estimates were accurate to within 1 h day⁻¹, which is well within the variability associated with in-canopy sensor measurements (Magarey et al., 2001).
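A schematic of that three-step procedure appears below. The energy-balance step is reduced to counting hours of modeled condensation, and the regression slope, intercept, and correction factor are hypothetical stand-ins for the crop-specific values fitted by Sentelhas et al. (2006).

```python
def station_lwd_hours(hourly_lambda_e, interval_h=1.0):
    """Step 1: daily LWD at the turfgrass station from hourly
    Penman-Monteith latent heat flux (W m-2); negative flux indicates
    condensation under the sign convention above. A full model would
    also track persistence of already-deposited water until it
    evaporates; that bookkeeping is omitted here for brevity."""
    return sum(interval_h for le in hourly_lambda_e if le < 0)

def canopy_top_lwd(station_lwd_h, slope=0.94, intercept=0.6):
    """Step 2: map station LWD to canopy-top LWD with a linear relation
    fitted per crop. The coefficients shown are hypothetical."""
    return slope * station_lwd_h + intercept

def in_canopy_lwd(canopy_top_h, correction_h=1.0):
    """Step 3: apply a crop-specific correction (hypothetical value)
    for positions deeper in the canopy."""
    return canopy_top_h + correction_h
```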
Empirical modeling takes a different approach, using statistical best-fit algorithms to help choose parameters and functions that yield the most accurate estimates of LWD. For example, empirical approaches have used classification and regression trees, neural networks (Francl & Panigrahi, 1997), and fuzzy logic (Kim et al., 2004).
Compared to physical models, empirical LWD models possess a different set of advantages and disadvantages for application to warning systems. Some of these models depend on relatively few input parameters (e.g., Gleason et al., 1994), and therefore can be utilized in more locations than physical models that are more input-heavy. On the other hand, although empirical models can be highly accurate for the sites and regions where they were developed, they may not be as portable as physical models because they depend on best-fit statistics rather than physical principles. However, Sentelhas et al. (2007b), in validating an empirical model that used hours of RH > 90% as a surrogate for LWD, showed that adjusting the RH threshold to account for regional climate differences resulted in LWD estimates that were as accurate as those derived from physical models.

The distinction made between physical and empirical models is an oversimplification; in fact, most models are hybrids of the two approaches. For example, the "physical" Penman-Monteith energy balance model incorporates parameters whose values are determined empirically. Conversely, the "empirical" CART/SLD model and fuzzy logic models (Kim et al., 2004, 2006) blend physical principles with statistical methods. In the CART/SLD and fuzzy logic approaches, starting with physical principles that govern the presence or absence of water on plant surfaces increased the likelihood of deriving models with acceptable predictive accuracy. Using this intuitive approach can also make hybrid models simpler to explain to non-meteorologists than purely physical or empirical models (Kim et al., 2002).
Hybrid models can also be portable to regions far from those where they were developed. For example, Kim et al. (2005) observed that a simple correction factor enabled a fuzzy logic model to estimate LWD with acceptable accuracy in a tropical climate (Costa Rica), even though the original model had been developed in the Temperate Zone (Midwest and Great Plains of the U.S.). Similar results were obtained by Sentelhas et al. (2004c) when estimating LWD with the CART/SLD model for a cotton crop in São Paulo State, Brazil. As in the RH work of Sentelhas et al. (2007b), these findings demonstrate that adaptation of LWD estimation models with empirical characteristics to new climates and regions can be accomplished relatively easily.
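The RH-threshold surrogate discussed above is simple enough to state in a few lines. In the sketch below, the base rule follows Sentelhas et al. (2007b) in flagging hours with RH above 90%; treating the threshold as a tunable per-region parameter reflects their adjustment approach, though the specific values any region would use are not given here.

```python
def lwd_from_rh(hourly_rh_percent, threshold=90.0, interval_h=1.0):
    """Estimate daily LWD as the number of hours with RH above a
    threshold. RH > 90% is the base rule of Sentelhas et al. (2007b);
    the threshold is exposed as a parameter so it can be re-fitted
    to a regional climate."""
    return sum(interval_h for rh in hourly_rh_percent if rh > threshold)
```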

CONCLUSION
There is no single "best" method to acquire weather data for use in disease-warning systems. Instead, local, regional, and national circumstances may influence the sustainability of each strategy. Magarey et al. (2002) present a vision for implementing agricultural decision-support systems that is relevant to the narrower focus of the present paper. They point out that consortia of public-sector researchers and extension educators, private-sector service providers, and growers are likely to be the most sustainable arrangements for introducing and sustaining decision-support systems in North America, and the same generalization may hold for obtaining weather data inputs to warning systems. Researchers can undertake studies that lay the groundwork for obtaining weather data by developing sensors and validating their performance in the environment, pioneering models that estimate weather data, and developing new ways to extend the spatial domain of weather station measurements. Extension educators can help to ensure that growers' voices are heard as a warning system's design develops and matures. Private-sector companies can package the products of academic research and extension into formats that fit grower needs, while profiting from the services they provide. Growers determine which commercial services are sufficiently valuable and practical to succeed, and provide feedback for improvements to extension personnel, researchers, and service providers.
This is a compelling vision, but other modes of obtaining weather data may be more successful in some circumstances. The do-it-yourself model may flourish for growers who are committed to meeting quality-control and other management requirements. Alternatively, a predominantly public-sector model may be more practical where the private sector does not offer suitable weather-estimation services.
Reliability of weather data inputs is the backbone for sustainability of any of these schemes. Proper installation and timely maintenance of weather instrumentation, along with diligent error-checking of weather data, are unglamorous but inescapable requirements for implementing any disease-warning system successfully. This means that adequate training of weather-monitoring personnel is essential. In addition, convenience and simplicity of use will determine whether a warning system makes the transition from the toolbox to the real world of crop disease management.