12 Remote Sensing of Submerged Aquatic Vegetation

Hyun Jung Cho1, Deepak Mishra2,3 and John Wood4
1Department of Integrated Environmental Science, Bethune-Cookman University, Daytona Beach, FL
2Department of Geosciences, Mississippi State University, Mississippi State, MS
3Northern Gulf Institute and Geosystems Research Institute, Mississippi State University, Mississippi State, MS
4Harte Research Institute for Gulf of Mexico Studies, Texas A&M University-Corpus Christi, Corpus Christi, TX, USA


Introduction
Remote sensing has significantly advanced spatial analyses of terrestrial vegetation for various fields of science. The plant pigments chlorophyll a and b strongly absorb energy in the blue (centered at 450 nm) and red (centered at 670 nm) regions of the electromagnetic spectrum to utilize the light energy for photosynthesis. In addition, the internal spongy mesophyll structures of healthy leaves strongly reflect energy in the near-infrared (NIR) region (700-1300 nm) (Jensen, 2000; Lillesand et al., 2008). These distinctive spectral characteristics of green plants, low reflectance in the visible and high reflectance in the NIR, have been used for mapping, monitoring, and resource management of vegetation, and have also been used to develop spectral indices such as the Simple Vegetation Index (SVI = NIR reflectance - red reflectance) and the Normalized Difference Vegetation Index (NDVI = (NIR reflectance - red reflectance)/(NIR reflectance + red reflectance)) (Giri et al., 2007).
The simplicity and flexibility of vegetation indices allow comparison of data obtained under varying light conditions (Walters et al., 2008). NDVI was first suggested by Rouse et al. (1973) and remains one of the earliest and most popular vegetation indices in use to date. It is usually applied in an attempt to decrease atmospheric and surface Bidirectional Reflectance Distribution Function (BRDF) effects by normalizing the difference between the red and NIR reflectance by the total radiation. Index values have been associated with various plant characteristics, including vegetation type (Geerken et al., 2005), vegetation cover (du Plessis, 1999), vegetation water content (Jackson et al., 2004), biomass and productivity (Fang et al., 2001), chlorophyll level (Wu et al., 2008), PAR absorbed by crop canopy (Goward & Huemmrich, 1992), and flooded biomass (Beget et al., 2007), at a broad span of scales from individual leaf areas to global vegetation dynamics.
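The two indices above can be computed directly from band reflectances. As a minimal sketch (the function names and sample reflectance values are our own illustrations, not taken from the cited studies):

```python
import numpy as np

def simple_vegetation_index(nir, red):
    """SVI = NIR reflectance - red reflectance."""
    return np.asarray(nir, dtype=float) - np.asarray(red, dtype=float)

def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red), guarding against a zero denominator."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    return np.where(denom != 0, (nir - red) / denom, 0.0)

# A dense terrestrial canopy (high NIR, low red) yields NDVI near 1,
# while water-attenuated NIR over submerged vegetation pulls the index down.
print(ndvi(0.45, 0.05))  # ~0.8
print(ndvi(0.06, 0.04))  # ~0.2
```

Both functions accept scalars or whole image bands, since numpy broadcasts the arithmetic element-wise.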

Remote sensing of submerged aquatic vegetation (SAV)

Submerged aquatic vegetation (SAV)
Submerged aquatic vegetation (SAV) is a group of vascular plants that grow entirely underwater; they may reach, but do not emerge from, the surface of shallow waters. SAV includes seagrass species that are a vital component of the ecological processes, dynamics, and productivity of coastal ecosystems. Healthy beds of SAV provide nursery and foraging habitats for juvenile and adult fish and shellfish, protect them from predators, provide food for waterfowl and mammals, absorb wave energy and nutrients, produce oxygen, improve water clarity, and help settle suspended sediment by stabilizing bottom sediments (Jin, 2001; Findlay et al., 2006). Assessment of SAV distribution, composition, and abundance has been of particular interest to coastal environmental managers, scientists, developers, and recreational users, as the information serves as an excellent indicator of aquatic environmental quality.

Remote sensing of underwater habitats
Remote sensing is a valuable tool for monitoring benthic habitats such as SAV, benthic algae, and coral-reef ecosystems, and several researchers have tested airborne and spaceborne sensor systems for such studies (e.g., Mishra et al., 2005). Spatial resolutions of these systems range from 30 m for the Landsat Thematic Mapper (TM) to 2.44 m for QuickBird multispectral data and 1 m or less for airborne hyperspectral data. Those evaluating the utility of TM have mapped subtidal coastal habitats (Khan et al., 1992), delineated sand bottoms (Michalek et al., 1993), classified coral reef zones (Mishra et al., 2005, 2006), evaluated the benthos (Matsunaga & Kayanne, 1997), and performed time series analyses (Dobson & Dustan, 2000). Similarly, researchers have used IKONOS (4 m) and QuickBird (2.44 m) imagery with radiative transfer models to map benthic habitats (Mishra et al., 2005, 2006) and applied a similar model to Airborne Imaging Spectroradiometer for Applications (AISA) hyperspectral data to identify benthic habitats (Mishra et al., 2007).
Mapping of SAV using satellite data has focused on supervised and unsupervised classifications based on signal variations in the multispectral bands, especially those in the short visible wavelengths with high water penetration (Ackleson & Klemas, 1987; Lyzenga, 1981; Marshall & Lee, 1994; Maeder et al., 2002; Ferguson & Korfmacher, 1997; Pasqualini et al., 2005). The NIR region is seldom used due to its high attenuation in water. When SAV beds are dense, the water is clear, and depth and sediment are relatively constant, fine-scale spectral variation is often overlooked during classification. In other cases, a radiative transfer model is used to correct for solar angle, atmospheric perturbation, substrate type, and depth, but this requires extensive in situ measurements (Zimmerman & Mobley, 1997).
Most of the currently available radiative transfer models, or physics-based models, have been applied to map benthic features in relatively clear aquatic environments (i.e., relatively deep, pristine coral reefs or seagrass meadows) and do not adequately correct for the strong NIR absorption by water (Mumby et al., 1998; Holden & LeDrew, 2001, 2002; Ciraolo, 2006; Brando et al., 2009). However, NIR reflectance serves as the primary cue for discriminating vegetation type and as the critical component of the widely used vegetation indices.

Dilemmas in remote sensing of shallow aquatic systems and SAV
Remote sensing of benthic habitats is complicated by several factors, including (1) atmospheric interference, (2) variability in water depth, (3) water column attenuation, and (4) variability in bottom albedo or bottom reflectance. In aquatic remote sensing, the total signal received at satellite altitude is dominated by radiance contributed by atmospheric scattering; only 8-10% of the signal corresponds to the water reflectance and reflectance from benthic features (Kirk, 1994; Mishra et al., 2005). Correction for atmospheric effects is therefore essential for retrieving any quantitative information about surface waters or benthic habitats from satellite images, and the lack of a rigorous absolute atmospheric correction procedure can introduce significant errors into a satellite-derived benthic habitat map. There is also a tendency among benthic mapping researchers to use a relative atmospheric correction procedure, such as a deep-water pixel correction, especially when local aerosol data and validation data are lacking. This often yields mediocre classification results.
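The deep-water pixel approach mentioned above can be sketched as a generic per-band dark-pixel subtraction; this is a simplification for illustration, not any particular study's implementation, and the array layout is an assumption:

```python
import numpy as np

def deep_water_correction(image, deep_water_mask):
    """Relative atmospheric correction by deep-water pixel subtraction.

    Over optically deep water the water-leaving signal is assumed
    negligible, so the mean radiance there approximates the per-band
    atmospheric path radiance, which is then subtracted from every pixel.

    image: (rows, cols, bands) radiance cube.
    deep_water_mask: (rows, cols) boolean mask of deep-water pixels.
    """
    path_radiance = image[deep_water_mask].mean(axis=0)  # per-band offset
    # Clip to zero so the subtraction never produces negative radiance.
    return np.clip(image - path_radiance, 0.0, None)
```

Because the offset is a single value per band, this relative method cannot account for spatially varying aerosols, which is one reason it often yields the mediocre results noted above.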
Knowledge of the optical properties of the water column can help eliminate changes in reflectance attributable to variable depth and water column attenuation, which often lead to misclassification of the benthos (Mishra et al., 2005). Mishra et al. (2005) showed that deriving accurate bottom albedo or bottom reflectance with a radiative transfer model requires knowing the water depth and water column optical properties (absorption and backscattering) for the study area. Knowledge of bottom albedo in shallow waters is necessary to model the underwater and above-water light fields, to enhance underwater object detection or imaging, and ultimately to determine the distribution of benthic habitats (Gordon & Brown, 1974). Mishra et al. (2005, 2006, 2007) also point out that the signals measured by a sensor above the water surface of a shallow marine environment are strongly affected by phytoplankton abundance (chlorophyll absorption), water column interactions (absorption by water and scattering by suspended sediments), and radiance reflected from the bottom. For the bottom contribution to be retrieved, the water column contributions have to be removed, and the optical properties have to be known or at least derivable. However, measuring these optical properties accurately is very challenging because of logistical issues and instrumentation errors, which in turn can lead to inaccurate benthic maps.
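As a hedged sketch of this kind of retrieval, a simplified two-flow shallow-water model in the spirit of Maritorena et al. (1994), not the exact model used in the cited studies, can be inverted for bottom albedo once depth and the diffuse attenuation coefficient are known:

```python
import numpy as np

def bottom_albedo(r_measured, r_deep, kd, depth):
    """Invert the simplified shallow-water reflectance model
        R = R_deep + (R_b - R_deep) * exp(-2 * Kd * z)
    for the bottom albedo R_b, given the reflectance of optically deep
    water (R_deep), the diffuse attenuation coefficient Kd (m^-1),
    and the water depth z (m)."""
    attenuation = np.exp(-2.0 * kd * np.asarray(depth, dtype=float))
    return r_deep + (r_measured - r_deep) / attenuation
```

The factor of 2 in the exponent accounts for the round trip (downwelling plus upwelling) through the water column; errors in depth or Kd propagate directly into the retrieved albedo, which is why the in situ measurements discussed above are critical.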
Variability in bottom types, and hence albedo, gives rise to a mixed spectral response that often reduces classification accuracy. Specific problems such as complex benthic combinations (e.g., sandy areas with variable amounts of algal cover; variation in color, texture, and size) and error in depth estimation can also have a considerable impact on the classification results. Mishra et al. (2005) proposed several solutions for increasing the number of elements separable by a classification scheme and the resulting accuracies, including an extensive field campaign that acquires enough samples to enable statistical evaluation of each class and derives detailed ecological and biological information for each in situ data point. Close-range hyperspectral studies that discriminate between different types of benthic features can be used to develop baseline spectra and thereby minimize spectral confusion in satellite imagery.
Shallow littoral areas (generally between the shoreline and a water depth of 2 m) are among the most productive habitats, yet also the most sensitive to human-induced environmental alteration and global climate change. Modeling of optical water properties for the littoral zone is more complicated due to rapidly changing water depth and/or substrate and higher amounts of Colored Dissolved Organic Matter (CDOM) and/or suspended particles (phytoplankton, seston, and inorganic particles) compared to the deeper portions of oceans. In addition, bottom backscattering in shallow areas is more significant, which makes the NIR signals more important, especially in areas containing substantial amounts of seagrasses, benthic algae, or phytoplankton (Kutser et al., 2009); in such areas the conventional Beer-Lambert exponential light attenuation with depth is not applicable (Holden & LeDrew, 2002).
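For reference, the conventional Beer-Lambert attenuation that breaks down in shallow, bottom-reflecting water is simply exponential decay with depth; the Kd value below is chosen for illustration only:

```python
import math

def irradiance_at_depth(e0, kd, z):
    """Beer-Lambert attenuation with depth: E(z) = E0 * exp(-Kd * z),
    where E0 is the subsurface irradiance, Kd the diffuse attenuation
    coefficient (m^-1), and z the depth (m)."""
    return e0 * math.exp(-kd * z)

# With Kd = 1.5 m^-1, a plausible turbid coastal value, only about 22%
# of the subsurface irradiance remains at 1 m depth. Bottom backscattering
# in the littoral zone adds an upwelling term this model ignores entirely.
print(irradiance_at_depth(1.0, 1.5, 1.0))
```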
Upwelling signals from water bodies contain several components, including reflectance from the water surface, the water column (suspended matter), and bottom backscattering (SAV and substrate) (Spitzer & Dirks, 1987). The aforementioned conventional vegetation indices also are not effective for plants that grow underwater or that are temporarily flooded (Beget & Di Bella, 2007; Cho et al., 2008), because the water overlying the vegetation canopies dampens the characteristic 'red absorption' and 'NIR reflectance' features (Han & Rundquist, 2003; Cho, 2007; Cho et al., 2008; Fig. 1). Differentiation of the SAV spectral signature from bare substrate or algae is further limited in shallow coastal waters, which are more turbid than open ocean waters (Bukata, 1995) due to higher levels of phytoplankton, suspended sediment, and dissolved color. According to our on-going study using hyperspectral data obtained over both experimental tanks and field seagrass habitat, the SAV signal rapidly decreases as water depth increases, and almost completely disappears within a depth of 0.5 m in even mildly turbid waters (turbidities > 12 NTU).

Fig. 1. Depth-induced reflectance variation of submerged aquatic vegetation (SAV) in clear water, with 10 cm to 50 cm of water above the SAV canopy. The highest reflectance curve corresponds to 10 cm; reflectance continuously decreases as water depth increases.

SAV mapping using hyperspectral data
Two decades ago, only spectral remote sensing experts had access to hyperspectral images or the software tools necessary to take advantage of such images. Over the past decade, hyperspectral image analysis has matured into one of the most powerful and fastest growing technologies in the field of remote sensing (Phinn et al., 2008). While multispectral remote sensing systems detect radiation in a small number of broad regions of the electromagnetic spectrum, hyperspectral sensors acquire numerous very narrow, contiguous spectral bands throughout the visible, near-infrared, mid-infrared, and thermal infrared portions of the electromagnetic spectrum for every pixel in the image, yielding much more detailed spectral data (Govender et al., 2009).
Collection and processing of hyperspectral imagery can be quite costly, depending on the size of the area to be studied. For the imagery to be usable for sub-aquatic analysis, Finkbeiner et al. (2001) suggest the following guidelines: The best time of year for collecting hyperspectral imagery is often early summer, during the season of maximum biomass and when there is less epiphytic coverage.


The imagery should be collected when turbidity is low; this is often during times of low or no wind. High turbidity may also be caused by heavy rains, winds on previous days, and localized dredging. Often, boat traffic may cause a localized but far-spreading plume of turbidity as sediments are re-suspended.


Winds can also cause problems other than turbidity, such as wrack lines, debris lines, whitecaps, and areas with unacceptable amounts of glint. As a general rule, winds below 8 kph are acceptable, winds between 8 and 15 kph may be acceptable depending on the locality, and winds above 15 kph are generally unacceptable.

Tidal stage can play an important role in the success of imagery collection. Consult local and/or NOAA tide gauges to plan for acquisition within 2 hours of the lowest tide for the collection area, unless the estuary drains an area of highly turbid or tannic water, in which case a rising tide may be desirable.


Collection times should be planned to adjust for sun angle, to avoid both sun glint and shadows. As a general rule, sun angles between 30° and 45° are recommended; different sensors may allow or require a larger or smaller angle.


Clouds and haze create areas of shadows and distortion as well as white or gray streaks in the imagery, and should be avoided as much as possible.


Field work should occur simultaneously with the sensor flight. Since it is virtually impossible to collect all the field data needed for signature development and accuracy assessment in the same time frame as the flights, every effort should be made to gather field measurements as close to the actual flight as possible, and under similar conditions. Field data should include measurements of reflectivity, turbidity, empirical or anecdotal data on epiphytic coverage, bottom type and reflectance, classification of the field point, and precise location. Locate these field measurements within a large enough patch that there will be no ambiguity, and consider the spatial sphere of uncertainty.
For instance, if the imagery will have a positional accuracy of approximately 2 m, the location should be consistent out to a four-meter radius.
The unique spectral signatures of vegetation are often used as training data for hyperspectral imagery classification. Chlorophyll and other pigments are found in SAV as in other photosynthesizing vegetation; however, the ratios of these pigments to each other differ by species, as well as with changes in conditions and stressors (Govender et al., 2009).
While these minor differences can be detected above the surface in spite of epiphytic coverage (Fyfe, 2003), detection of these differences below the surface may be hampered or dampened by the effects of the water column. Depth, water clarity or turbidity, organic and inorganic materials within the water column, the surface of the water itself, and physical properties such as the absorption of energy in the NIR and beyond can all affect the ability to discriminate the relatively small differences in the ratios of accessory pigments and chlorophyll (Kutser, 2004).
Case study in SAV mapping using hyperspectral data

Hyperspectral algorithm to correct overlying water effects
A new water-depth correction algorithm was developed to improve detection of underwater vegetation spectral signals. The algorithm was developed conceptually, then calibrated and validated using experimental and field data. The conceptual model was based on the idea that the upwelling signal measured above a water surface is the sum of the energy reflected from the water surface, the water column, and the water bottom. The energy reflected from the water surface and the water column (the volumetric reflectance) was combined into a single term because the surface reflectance is a constant and does not change with water depth (Lu & Cho, 2011).
The effects of the overlying water column on upwelling hyperspectral signals were modeled by empirically separating the energy absorbed and scattered by the water, using data collected through a series of controlled experiments with hypothetical bottom surfaces that either totally absorb or totally reflect light (Cho & Lu, 2010). Later, the white (totally reflecting) surface was replaced with a gray surface of known reflectance to reduce problems associated with enhanced multi-path scattering (Lu & Cho, 2011). The experimental setting allowed the calculation of water absorption and scattering values for water depths of up to 60 cm; the light remaining at depths beyond the experimentally measured points was estimated by establishing mathematical relationships between water depth and the vertical attenuation coefficient (Kd) derived from the experimental data (Washington et al., 2011). The depth- and wavelength-dependent water absorption and volumetric scattering factors (0-100 cm; 400-900 nm) were calculated and applied to independently measured underwater vegetation signals and to airborne hyperspectral data taken over shallow seagrass beds, to remove the effects of the overlying water.
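The black-surface/known-albedo calibration described above can be sketched with a minimal two-term model. This is our own simplification for illustration; the published algorithm of Lu & Cho (2011) is wavelength- and depth-dependent and considerably more elaborate:

```python
import numpy as np

def derive_column_terms(r_black, r_gray, rho_gray):
    """Estimate water-column terms from two tank measurements at the same
    depth: r_black over a totally absorbing bottom (surface + volumetric
    reflectance only) and r_gray over a bottom of known albedo rho_gray.

    Assumed model: R_measured = R_column + T * rho_bottom,
    where T is the round-trip transmittance of the water column."""
    r_column = np.asarray(r_black, dtype=float)  # bottom contributes nothing
    t = (np.asarray(r_gray, dtype=float) - r_column) / rho_gray
    return r_column, t

def correct_bottom_reflectance(r_measured, r_column, t):
    """Invert the model above for the bottom (e.g., SAV canopy) reflectance."""
    return (np.asarray(r_measured, dtype=float) - r_column) / t
```

In practice the two terms would be tabulated per wavelength and per depth from the experimental series, then looked up when correcting field or airborne spectra.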

Successful water correction in the infrared region
The empirically derived correction algorithm significantly restored the vegetation signals, especially in the NIR region, when applied to independently measured reflectance of underwater plants in indoor and outdoor tanks (Cho & Lu, 2010; Washington et al., 2011). The algorithm was also successful in restoring the NIR signals originating from seagrass-dominated sea floors when applied to airborne hyperspectral data of Mississippi and Texas coastal waters (Cho et al., 2009; Lu & Cho, 2011; Fig. 2). As stated earlier, NIR reflectance serves as the primary cue for discriminating benthic vegetation from other substrates. Due to the restored NIR reflectance, the correction algorithm increased the NDVI values for the seagrass pixels (Lu & Cho, 2011).

Ground truth data collection
Several hundred ground data points were collected over seagrass beds in Redfish Bay, Texas, in the summer of 2008 (June-July). Seagrass species makeup, water depth, vegetation percent coverage, and bottom substrate type were recorded at each site. Site location was recorded to accuracies within 1 m using a Real Time Kinematic (RTK) GPS. When necessary, the preselected random sites were shifted to avoid dry or unreachable locations.
The field collected data were entered into a spatial database along with descriptive attributes to help determine which class each sampling site would be assigned to.Data points were then randomly divided into training or accuracy assessment points.

Image processing and vector classification
Image data were obtained in 63 bands of the AISA Eagle hyperspectral sensor over the seagrass beds in October 2008 and corrected for atmospheric effects. Since selection of the proper bands for analysis helps reduce noise and processing burden (Borges et al., 2007), several selection techniques were used within this project, including Principal Component Analysis and regression analysis. Ultimately, seven bands recommended by Fyfe (2003) were used; to further reduce noise, these were reduced to the five bands recommended by Cho et al. (2009).
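One simple band-selection criterion in the PCA family is to rank bands by the magnitude of their loading on the first principal component; this generic sketch is not the exact procedure used in the project:

```python
import numpy as np

def pca_band_loadings(cube):
    """Rank hyperspectral bands by the magnitude of their loading on the
    first principal component; bands with larger loadings carry more of
    the scene variance and are candidates to retain.

    cube: (n_pixels, n_bands) reflectance matrix."""
    centered = cube - cube.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return np.abs(eigvecs[:, -1])           # loadings on the largest component
```

A regression-based alternative would instead score bands by how well they predict the field-measured classes, trading variance for class separability.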
Image segmentation, which groups like pixels into homogeneous areas, was performed prior to image classification. Initially, an unsupervised classification using ISODATA (Iterative Self-Organizing Data Analysis Technique) (De Alwis et al., 2007) was performed.
After the initial image classification, each segmented vector in the output was assigned to a seagrass/substrate class (i.e., Thalassia testudinum, Halodule wrightii, Ruppia maritima, Mixed Beds, Bare, or Unclassified). Vectors containing only one type of point (e.g., 'Halodule') were considered finally classified. Those classified as mixed or unknown were removed from the classified vector set, a mask was created of their spatial footprint, and the entire classification procedure was re-run on the image for that footprint area only. When it became impossible to further classify the image by this method, a supervised classification was performed using a selection of training points as training data, which produced monospecific vectors. The same mixed-method classification procedure was applied to the image data after the water correction algorithm had been applied.
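The unsupervised step can be approximated with a minimal k-means clusterer; ISODATA extends k-means with cluster splitting and merging, and this sketch and its deterministic initialization are our own simplification:

```python
import numpy as np

def kmeans_labels(pixels, k, n_iter=20):
    """Minimal k-means on pixel spectra, a stand-in for ISODATA.

    pixels: (n, n_bands) array of spectra. Centers are seeded with the
    first k pixels for determinism; a real run would use a smarter
    initialization. Returns an integer cluster label per pixel."""
    centers = pixels[:k].astype(float).copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iter):
        # Assign each pixel to its nearest center (Euclidean distance).
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels
```

In the workflow above, the resulting clusters (not the pixels directly) would then be labeled against the field points, and mixed clusters masked and re-run.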

Improved accuracy assessment
The water correction algorithm improved the classification accuracy in an image subset from an overall accuracy of 28% to approximately 36%. Identification of the species Halodule wrightii improved from 33% user's accuracy to almost 78%, and Thalassia testudinum from 0% user's accuracy to almost 17%. Although these numbers appear somewhat low, several factors must be considered: this analysis used only a subset of the imagery, which reduced the variation in the area analyzed but also left less training and accuracy assessment data to work with.
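Overall and user's accuracies follow directly from the confusion matrix; the counts below are illustrative, not the project's actual tallies:

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall and per-class user's accuracy from a confusion matrix whose
    rows are mapped classes and columns are reference (ground-truth)
    classes. User's accuracy for a class = correctly mapped points /
    total points mapped to that class."""
    c = np.asarray(confusion, dtype=float)
    overall = np.trace(c) / c.sum()
    users = np.diag(c) / c.sum(axis=1)
    return overall, users

# Illustrative two-class example (Halodule vs. other):
conf = [[28, 8],   # 36 points mapped as Halodule, 28 correct
        [6, 18]]   # 24 points mapped as other, 18 correct
overall, users = accuracy_metrics(conf)
```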
Using the full set of imagery and the combination of supervised and unsupervised classification, results have improved considerably; we have achieved overall accuracies of over 60%.In addition, we have calculated seagrass presence/absence using the complete corrected imagery, achieving overall accuracies of over 95%.With the future addition of in situ turbidity and bathymetric data, these accuracies should continue to improve.

Current efforts in developing water correction module and graphical user interface
We have continued improving the algorithm by including turbidity (measured in NTU, Nephelometric Turbidity Units) as an additional function. In addition, the water correction algorithm is currently being implemented as a module that can be called from the programming environment of ENVI (ENvironment for Visualizing Images, ITT Visual Information Solutions) through a Graphical User Interface (GUI) (Gaye et al., 2011). In the current module and GUI, users select the water depth (0-2.0 m) and turbidity (0-20 NTU) using slide bars, and the given hyperspectral band(s) are corrected accordingly. The corrected reflectance can be generated and compared to the original at a given pixel or within a small subset, and the original and corrected images can be displayed side by side.

Conclusion
Remote sensing has significantly advanced spatial analyses of terrestrial vegetation for various fields of science. However, mapping of benthic vegetation or submerged aquatic vegetation (SAV) using remotely sensed data is complicated by several factors, including atmospheric interference, variability in water depth and bottom albedo, and water column attenuation by scattering and absorption. Hence, correction for atmospheric and overlying water column effects is necessary to retrieve any quantitative information on SAV from satellite and airborne images, especially when using hyperspectral data. Significant misclassification of SAV often occurs due to the lack of information on in situ water depths and water column optical properties. Most of the currently available radiative transfer models work well only when applied to mapping of benthic features in relatively clear aquatic environments, and they do not correct for the strong water absorption of near-infrared energy. The fluctuating water depths and high amounts of suspended particles and colored dissolved organic matter in shallow littoral zones make it even more challenging to map benthic vegetation using remotely sensed data. A new water-depth correction algorithm was developed conceptually, then calibrated and validated using experimental and field data. The effects of the overlying water column on upwelling hyperspectral signals were modeled by empirically separating the energy absorbed and scattered by the water, using data collected through a series of controlled experiments. The empirically derived algorithm significantly restored the vegetation signals, especially in the NIR region. Due to the restored NIR reflectance, which serves as the primary cue for discriminating SAV from other substrates, use of the water-corrected airborne data increased the NDVI values for the SAV pixels and also improved the seagrass classification accuracy. Our continuing efforts to incorporate turbidity and CDOM into the algorithm, to develop a graphical user interface, and to implement the algorithm as a module callable from commercially available image processing software promise a user-friendly application and wide use of the algorithm in the near future.

Fig. 2. The original (left) and water-corrected (right) airborne AISA Eagle hyperspectral image at 741 nm, obtained over seagrass beds in Redfish Bay, TX, in 2008.