Article

Shifts in Forest Structure in Northwest Montana from 1972 to 2015 Using the Landsat Archive from Multispectral Scanner to Operational Land Imager

1 Department of Land Resources & Environmental Sciences, Montana State University, PO Box 173120, Bozeman, MT 59717, USA
2 USDA Forest Service, Rocky Mountain Research Station, 800 E. Beckwith, Missoula, MT 59801, USA
3 College of Earth, Ocean, and Atmospheric Sciences, Oregon State University, 104 CEOAS Administration Building, Corvallis, OR 97331, USA
4 USDA Forest Service, Pacific Northwest Research Station, 3200 SW Jefferson Way, Corvallis, OR 97331, USA
* Author to whom correspondence should be addressed.
Forests 2018, 9(4), 157; https://doi.org/10.3390/f9040157
Submission received: 15 February 2018 / Revised: 15 March 2018 / Accepted: 19 March 2018 / Published: 21 March 2018

Abstract
There is a pressing need to map changes in forest structure from the earliest time period possible given forest management policies and accelerated disturbances from climate change. The availability of Landsat data spanning more than four decades helps researchers study an ecologically meaningful length of time. Forest structure is most often mapped using lidar data; however, these data are prohibitively expensive and cover a narrow temporal window relative to the Landsat archive. Here we describe a technique that uses the entire length of the Landsat archive, from Multispectral Scanner to Operational Land Imager (M2O), to produce three novel outcomes: (1) we used the M2O dataset and standard change vector analysis methods to classify annual forest structure in northwestern Montana from 1972 to 2015, (2) we improved the accuracy of each yearly forest structure classification by applying temporal continuity rules to the whole time series, with final accuracies ranging from 97% for the two-category classification to 68% for the six-category classification, and (3) we demonstrated the importance of pre-1984 Landsat data for long-term change studies. As the Landsat program continues to acquire Earth imagery into the foreseeable future, time series analyses that aid in accurately classifying forest structure will be key to the success of future land management.

1. Introduction

Land managers, foresters, wildlife biologists, and other environmental scientists all need inexpensive and easily accessible information characterizing current and past conditions in order to monitor changes in the age/size of forests under their purview [1,2,3,4,5]. Time series analysis tools such as harmonic regression [6], LandTrendr [7] and TimeSync [8], LandsatLinkr [9], season-integrated products [10], and whole time series analyses [11] have great utility for mapping historic and current forest structure. Managers can observe how forests have increased or decreased across structure classes within a landscape context. This can be especially important in multiple-use areas where different uses are often opposed to one another, for instance, on US Forest Service land, where tree harvest and the protection of threatened or endangered species create management conflicts. A time series analysis of forest structure can provide empirical evidence of loss or gain of different ages of forest, which sets the backdrop for discussion concerning current and future management policies.
Most successful forest structure mapping has been performed utilizing Light Detection and Ranging (lidar) data [12,13,14,15,16,17]. Lidar data are acquired by using pulsed laser measurements to provide a 3-dimensional point cloud from which precise measures of tree size and stand structure can be derived. However, lidar data are prohibitively expensive and difficult to process and store, especially for large areas. In addition, there is no long-term archive of lidar data with which to perform a time series analysis. This is where the utility of easily accessible Landsat data, and more specifically, the temporal length of the entire Landsat archive dataset, surpasses other datasets.
The Landsat archive is a priceless Earth data repository that is increasingly used for long-term, time-series-based change analyses of our environment, especially since the United States Geological Survey (USGS) began providing the data free of charge (1 October 2008; https://landsat.gsfc.nasa.gov/landsat-7) [18,19]. Most recent classification-based time series studies, however, have not used the entire Landsat archive length; rather, they have utilized annual data starting with the Thematic Mapper (TM) sensor through the Enhanced Thematic Mapper Plus (ETM+) and the Operational Land Imager (OLI) sensors that are currently in orbit [20,21,22,23,24]. Although providing useful insights on temporal changes in our environment, these previous studies have not taken advantage of an additional 12 years of Multispectral Scanner (MSS) data from 1972 to 1984. Some studies have taken advantage of the entire Landsat data archive (MSS to OLI or M2O) [12,25], but these analyzed continuous variable data rather than classified thematic data and utilized the M2O time series to inform a single date analysis. Forest-based studies in the United States could greatly benefit from analysis of 1970s data because the National Forest Management Act of 1976 had a profound effect on how federal forests are managed in the United States. We acknowledge that the MSS data have some weaknesses—including fewer and spectrally broader bands, coarser spatial resolution, lower radiometric resolution, and large gaps in data—but we believe these data contain considerable amounts of information that can be incorporated in time series change analyses, and as such should be exploited when reasonably possible.
A substantial amount of image pre-processing needs to be performed on each scene in order to produce an M2O time series for analysis. LandsatLinkr, a program developed by the Laboratory for Applications of Remote Sensing in Ecology at the USDA Forest Service, PNW Research Station, and Oregon State University, Department of Forest Ecosystems and Society, provides an efficient and automated system “for processing large volumes of Landsat imagery and building long spectrally consistent chronologies across MSS, TM, ETM+, and OLI sensors” [9]. This system not only automates image pre-processing, but also spectrally calibrates MSS images to TM images and OLI images to ETM+ images so that spectral indices can be compared among the four image sensor types.
Our objective was to use the M2O dataset to accurately classify forest structure in a diverse forest landscape in northwestern Montana. Only Landsat imagery and several ancillary data layers were utilized to classify forest structure, reducing the overall costs as compared to a lidar-based study. We observed changes in annual forest structure from 1972 to 2015 and used the temporal information within the time series to improve upon each year of the structure classifications. Additionally, trends in growth and recovery from changes such as harvest, fire, and pests were examined. Finally, we demonstrated the importance of utilizing pre-1984 Landsat data (i.e., MSS) for long-term change studies.

2. Materials and Methods

2.1. Study Area

Our study area covers approximately 3.6 million ha in northwestern Montana (Figure 1). It is a mountainous region ranging from 549 to 3188 m in elevation with a variety of alpine, open, and mixed forest types. These complex forests have gone through a variety of changes since the early 1970s, including from natural events such as fire and insect infestations and from human influences such as tree harvests. The majority of the study area is covered by six Landsat WRS-1 scenes (MSS) and four WRS-2 scenes (TM, ETM+, OLI) (Figure 1).

2.2. Data Collection

2.2.1. Landsat Imagery

We downloaded 684 Landsat images including MSS (1972–1984) (230), TM (1984–2012) (252), ETM+ (1999–2015) (164), and OLI (2013–2015) (38) summer images (June through September) from the USGS (for the full list of images used in the production of each yearly composite image, see Supplementary Material, Table S1). Images with clouds were included, because the pre-processing procedures were intended to fill those gaps with surrounding data (either temporally or spatially within the annual season).

2.2.2. Forest Structure Reference Data

We visually classified forest structure at 674 field data sites to be used as reference data for training the initial classification. We used a segmented polygon layer characterizing forest stands that was developed using eCognition [26] to guide us to these locations so that we identified, at minimum, 100 training points per class. Three analysts independently interpreted each of the reference polygons using National Agriculture Imagery Program (NAIP) imagery collected in Montana during 2013 (when the field data sites were collected) along with imagery available in Google Earth. The interpreted results from each analyst were compared and the final class for each polygon was chosen as the class identified by two or more of the analysts. Pixel values were extracted from each of these polygons to serve as training data. We defined six structure classes as follows: Alpine (A), Open (O), Stand Initiation (S), Thin (T), Advanced Regeneration (R), and Mature (M). The categories were selected because they captured the gradient in distinct forest structural stages within our study area along with an understanding of wildlife-forest structure relationships in a parallel study [27]. Detailed definitions of these classes can be seen in Supplementary Material: Dictionary S1: Definitions of Forest Structure Classes.
Using imagery available in Google Earth that covered the entire study area, we identified 50 random points per reference class for each of the years 2013, 2005, and 1995 to be used for independent validations of the predicted classification maps. In order to perform an accuracy assessment on an early classification, we purchased 1:16,000- to 1:40,000-scale historical aerial photographs from the US Department of Agriculture Aerial Photography Field Office that covered only the northwestern portion of the study area (no historic photos were available for the southern and northeastern portions) and identified a minimum of 50 random points per reference class for the 1975 classification.

2.2.3. Ancillary Data

Several ancillary data sets were acquired for analysis and validation. We acquired a 30-m digital elevation model (DEM) of the study area from the USGS National Elevation Dataset (ned.usgs.gov). We downloaded hydrology data from the USGS National Hydrology Dataset (NHD, nhd.usgs.gov) and used them to mask water bodies from the study area. Finally, we acquired four-band, one-meter-pixel NAIP images from 2013 from the Montana State Library Natural Resource Information System (nris.mt.gov).

2.3. Landsat Data Processing through LandsatLinkr

We processed the imagery using R statistical software and the LandsatLinkr (LLR v.0.2.0 (beta)) package [9]. The image data that LLR produces are annual cloud-free composites for Tasseled Cap spectral indices [28,29] that are comparable across sensors. Within LLR, TM and ETM+ surface reflectance data are considered the spatial and spectral standard to which MSS and OLI data are modeled using multiple linear regression based on the relationship between sets of coincident and near-date image pairs between MSS and TM, and ETM+ and OLI. Prior to inter-sensor harmonization, LLR resamples Landsat Pre-Collection Level-1 MSS data to 30-m (nearest neighbor), improves georegistration [30], produces cloud masks [31], and calculates surface reflectance with the COST method [32], where the Lhaze parameter or “dark object value” is estimated from an automated implementation of the histogram method [33]. TM, ETM+ (Landsat Collection 1 Level-2), and OLI (Landsat 8 OLI/TIRS C1 Level-2) data required less pre-processing since they were delivered from USGS as surface reflectance (LEDAPS algorithm [34] and L8SR algorithm [35]) and included cloud masks (Fmask algorithm [36]). In the final step of annual compositing, for each year in the time series, all images that spatially intersected our study area were merged using the overlapping per-pixel mean while filling in where pixels were identified as cloud or cloud shadow in the masks. Where there were no unmasked pixels, a null value was assigned. In our case, there were several years that contained large regions of null data because of extensive, consistent cloud cover. These data were produced for each year from 1972 to 2015.
Unpublished data generated during LLR development found that Pearson’s correlation (r) between predicted (MSS and OLI) and observed (TM and ETM+) annual tasseled cap index values ranged from 0.82 to 0.96 for a test scene (WRS-2 path 38 row 29—an area very similar in cover types to those included in this study). MSS tasseled cap wetness was about 10% lower in correlation than brightness and greenness, likely because MSS does not include a SWIR band, which is weighted heavily in tasseled cap wetness. OLI correlation to ETM+ was less than 0.9 for all three tasseled cap indices, likely because predictive models between ETM+ and OLI are based on near-date images, unlike MSS and TM, which use coincident images.

2.4. Ancillary Data Processing

We derived ancillary datasets from the DEM and NAIP imagery, including slope, aspect, and texture information for the study area. The image texture mean and minimum values were derived from the standard deviation of the first principal component of the 1-m NAIP data, then resampled to 30-m pixels for analysis [26].
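The texture derivation can be sketched roughly as below. The PCA approach, block aggregation, and output (per-block standard deviation of PC1) are illustrative assumptions, not the exact eCognition workflow of [26].

```python
import numpy as np

def naip_texture(naip, block=30):
    """Sketch: texture from the standard deviation of the first
    principal component (PC1) of 1-m NAIP imagery, aggregated to
    30-m pixels.  `naip` is a (bands, H, W) array with H and W
    divisible by `block`.
    """
    b, h, w = naip.shape
    flat = naip.reshape(b, -1).astype(float)
    centered = flat - flat.mean(axis=1, keepdims=True)
    cov = np.cov(flat)                     # band covariance (b x b)
    _, evecs = np.linalg.eigh(cov)         # eigenvalues ascending
    pc1 = (evecs[:, -1] @ centered).reshape(h, w)
    # standard deviation of PC1 within each block x block window
    tiles = pc1.reshape(h // block, block, w // block, block)
    return tiles.std(axis=(1, 3))
```

The resulting 30-m texture grid can then be summarized (e.g., mean and minimum) alongside the other ancillary layers.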

2.5. Classification of Master Image

We executed a variety of forest structure classifications using several different classification methods (from the Caret package in R (version 3.4.1): http://topepo.github.io/caret/available-models.html). We then chose the classification methods with the highest relative overall accuracy to create each of the initial 6-class to 2-class predictions for 2013 (see Section 2.5.1).

2.5.1. Accuracy Assessment of the Initial 2013 Classification

We randomly selected 1000 pixels and determined the reference class of each sample pixel sequentially until exactly 50 pixels had been selected in each reference class (except for the Open class; very few random points fell within Open areas, so we used only 36 points for this class in the initial 2013 classifications). Additional pixels (of the 1000) were discarded once their class had reached its quota of 50. We did not conduct an analysis of “from-to” change accuracy because of the absence of reference data for such an analysis. Instead, we followed the summary of best practices for validation in [37] to minimize error propagation in analyses of “from-to” change. A weighted error matrix [37] was then calculated for five levels of the initial 2013 classification (Figure 2); however, we estimated the proportion of area for each class based on the reference classification rather than the map classification, since our validation data were stratified by reference class. User’s and producer’s accuracies along with the standard error were calculated for each level of classification as well [37].
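The quota-based selection protocol can be illustrated with a short sketch; the `reference_class` mapping and quota handling here are hypothetical simplifications of the procedure described above.

```python
import random

def sample_validation(reference_class, quota=50, seed=0):
    """Draw random pixels sequentially, keeping each pixel until its
    reference class has reached `quota`; extras are discarded.

    reference_class: dict mapping pixel id -> reference class label.
    Returns the retained pixel ids and the per-class counts.
    """
    rng = random.Random(seed)
    pixels = list(reference_class)
    rng.shuffle(pixels)                    # random selection order
    counts, sample = {}, []
    for p in pixels:
        c = reference_class[p]
        if counts.get(c, 0) < quota:       # keep until quota is met
            sample.append(p)
            counts[c] = counts.get(c, 0) + 1
    return sample, counts
```

A class with too few candidate pixels (as with Open in 2013) simply ends with fewer than `quota` points.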

2.6. Change Vector Analysis

Open and Stand Initiation were initially difficult to classify separately with sufficient accuracy, so we performed the change analysis using the 2013 5-class prediction as the master map (with O and S combined as one class). We utilized a change vector analysis (CVA) process [38] to classify each year of imagery compared to the master 2013 map. Advantages of using CVA for change analysis are that the process reduces compound error by not reclassifying unchanged locations and it provides reliable training data for years when data are unavailable or difficult to obtain. Compound error in locations that have potentially changed is a concern with this process; however, we can avoid high rates of compound error by utilizing CVA combined with temporal continuity rules applied to the resulting time series (see Section 2.7). CVA was applied to the three tasseled cap components, brightness, greenness, and wetness, to produce a change map for each year as follows:
ΔMagnitude = √[(B₁ − B₂)² + (G₁ − G₂)² + (W₁ − W₂)²],    (1)
where subscripts 1 and 2 refer to 2013 (the master image) and each other yearly image, respectively, and B = brightness, G = greenness, and W = wetness.
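The change-vector magnitude is straightforward to compute per pixel. A minimal sketch, assuming the three tasseled cap components for each image are held as 2-D arrays in a dictionary (an assumed data layout, not the authors' code):

```python
import numpy as np

def change_magnitude(master, year):
    """Per-pixel change-vector magnitude between the 2013 master
    composite and another year's composite.

    master, year: dicts with 'B', 'G', 'W' 2-D arrays (tasseled cap
    brightness, greenness, wetness).
    """
    return np.sqrt((master["B"] - year["B"]) ** 2
                   + (master["G"] - year["G"]) ** 2
                   + (master["W"] - year["W"]) ** 2)
```

Thresholding the resulting magnitude image then separates pixels of possible change from those unlikely to have changed.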
A threshold of possible change was assigned to each yearly change map to ensure that all possible changes were classified during the process (on average, approximately 44% of the pixels in each change image were identified as possibly changed). Using C5.0 in R (as it was consistently the best method during the agnostic classification processes (Section 2.5)) with the Tasseled Cap components, elevation, slope, and aspect as predictor variables, we reclassified only those pixels that fell within the threshold of possible change; a random selection of 10,000 pixels (2000 per class) from within the threshold of unlikely change was used as the training data.

2.7. Informing Yearly Classifications with Time Series

We utilized all 44 classified maps to help improve the accuracy of each yearly classification by (1) filling data gaps and (2) using knowledge-based rules to correct for temporal inconsistencies. This process also enabled us to separate Open from Stand Initiation for all of the yearly classifications. This was accomplished through applying several consecutive temporal continuity rules—based on local, ecosystem, and successional knowledge, and expected change behaviors—to the entire time series as follows (code/logic used for each rule can be seen in Supplementary Material: Logic S1: Logic Used for Time-Series-Informed Rules; examples of rules 1 through 6 are shown in Figure 3):
  • Gap fill #1 (where no data existed in a year—due to clouds, scan line gaps, and/or missing scenes for that year): utilized data from surrounding years by filling with the class of the previous year if it was not Unknown, or with the following year if the class of the previous year was Unknown; remained Unknown if the classes of the surrounding years were Unknown;
  • Continuity fill: looked for consistent Alpine, Mature, and Open/Stand Initiation (i.e., 1972 = A, 2015 = A, and the majority of the time series at this pixel = A, then fill this pixel with A);
  • Smoothing: looked for single occurrences within the time series (i.e., if 1973 = Thin, 1974 = Open/Stand Initiation, and 1975 = Thin, fill the 1974 pixel with Thin);
  • Separate Open from Stand Initiation: based on whether previous years were Mature, Advanced Regeneration, or Thin (we expect that Open would not follow Mature, Advanced Regeneration, or Thin directly, while Stand Initiation can follow them directly); does not include any changes to 1972;
  • Gap fill #2: utilized data from surrounding years by filling with the class of the previous year if it was not Unknown, or with the following year if the class of the previous year was Unknown; remained Unknown if the classes of the surrounding years were Unknown;
  • Separate Open from Stand Initiation for the 1972 classification: based on 1973 values;
  • Alpine by elevation (any Open pixel ≥ 2500 m was labeled as Alpine);
  • Mask low elevation to exclude areas of non-forest (exclude all pixels <500 m).
This process resulted in yearly maps from 1972 to 2015, each classified into the six forest structure classes defined in Section 2.2.2.
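Two of the temporal continuity rules, gap fill (#1/#5) and smoothing (#3), can be sketched per pixel as follows. The class codes and single-pass logic are illustrative assumptions; the exact rule logic is in Supplementary Material Logic S1.

```python
UNKNOWN = "U"

def gap_fill(series):
    """Gap fill rule: fill an Unknown year with the previous year's
    class if known, otherwise with the following year's class; leave
    Unknown when both neighbours are Unknown."""
    out = list(series)
    for i, c in enumerate(out):
        if c != UNKNOWN:
            continue
        prev = out[i - 1] if i > 0 else UNKNOWN
        nxt = series[i + 1] if i + 1 < len(series) else UNKNOWN
        out[i] = prev if prev != UNKNOWN else nxt
    return out

def smooth(series):
    """Smoothing rule: remove single-year blips, e.g. Thin, Open/Stand
    Initiation, Thin becomes Thin, Thin, Thin."""
    out = list(series)
    for i in range(1, len(out) - 1):
        if out[i - 1] == out[i + 1] != out[i]:
            out[i] = out[i - 1]
    return out
```

Applied to each pixel's 44-year class series, rules like these enforce the successional continuity expectations described above.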
We calculated trajectories of each of the six structure classes over the 44-year time period using the time-series-informed classifications. The trajectories displayed the percentage of the study area covered by each structure class. These trajectories allowed us to observe trends in regeneration, growth, and loss of forest cover over time within the study area.

2.8. Accuracy Assessment for Initial and Time-Series-Informed Classifications

Due to lack of adequate imagery for each year, we performed independent accuracy assessments on four years spread across the four decades (2013, 2005, 1995, and 1975). Validation data for both the initial and time-series-informed classifications were selected for these years in the same manner as described in Section 2.5.1, with the exception that slightly more than 50 points per class were identified for the 1975 classifications. A weighted error matrix (see Section 2.5.1 [37]) was created for the 6-class predictions to demonstrate the overall accuracies at the most detailed level. The error matrices for the 5-class through 2-class predictions were created by combining classes from the 6-class error matrices. Eight combinations of classes were tested (Figure 4). We compared the accuracies of the initial classifications to the time-series-informed classifications to evaluate the influence of the continuity rules on the time series.
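Combining classes of the 6-class error matrix into coarser levels amounts to summing the corresponding rows and columns. A sketch using unweighted counts for illustration (the actual assessment used area-weighted matrices per [37]):

```python
import numpy as np

def collapse_matrix(cm, groups):
    """Combine classes of an error matrix into a coarser level.

    cm: (n, n) error matrix (rows = map class, cols = reference class).
    groups: list of lists of original class indices, e.g.
            [[0], [1, 2], [3], [4], [5]] merges classes 1 and 2.
    """
    k = len(groups)
    out = np.zeros((k, k))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            out[i, j] = cm[np.ix_(gi, gj)].sum()  # sum the sub-block
    return out

def overall_accuracy(cm):
    """Proportion of agreement on the matrix diagonal."""
    return np.trace(cm) / cm.sum()
```

Merging easily confused classes moves their off-diagonal confusion onto the diagonal, which is why the coarser levels in Table 1 and Table 3 show higher overall accuracies.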

2.9. Comparison to US Forest Service Forest Inventory and Analysis Data

We independently evaluated our final forest structure classification for 2013 using field data collected as part of the US Forest Service Forest Inventory and Analysis (FIA) program during 2006–2012. We spatially assigned each subplot (~170 m2 from FIA) the forest structure class that was mapped for that pixel during 2013. We assessed differences in FIA-derived forest metrics across forest structure classes by implementing a multivariate analysis of variance (MANOVA) and visually displaying boxplots (i.e., the median and interquartile range; IQR). Our structural metrics included basal area weighted diameter at breast height (DBH), canopy cover and tree height derived using the Forest Vegetation Simulator [39], and stem density for trees larger than 12.7 cm (i.e., 5 inches) in DBH. Basal area weighted DBH is calculated with Equation (2).
Basal area weighted DBH = Σ(tree Basal Area × DBH) / Total Basal Area,    (2)
We expected to observe structural differences in our forest structure classes consistent with the aforementioned forest structure class definitions (Supplementary Material: Dictionary S1: Definitions of Forest Structure Classes.).
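The basal-area weighting in Equation (2) can be computed per subplot. A minimal sketch, assuming a hypothetical list of (DBH, basal area) pairs per tree:

```python
def ba_weighted_dbh(trees):
    """Basal-area weighted mean DBH for one subplot.

    trees: list of (dbh_cm, basal_area_m2) tuples; each tree's DBH is
    weighted by its share of the subplot's total basal area.
    """
    total_ba = sum(ba for _, ba in trees)
    return sum(ba * dbh for dbh, ba in trees) / total_ba
```

The weighting pulls the mean toward the larger-diameter trees that dominate stand basal area, which is why it is a useful structure metric.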

3. Results

3.1. Classification of Master Image

The best 6-class prediction for the 2013 master image had an overall accuracy of 68% and a standard error of 3%, while the 2-class prediction had an overall accuracy of 98% with a standard error of <1% (Table 1). We found that structure classes Open and Stand Initiation were particularly difficult to distinguish as evidenced by the producer’s and user’s accuracies (Table 2, 6-Class) so we combined these classes to improve accuracy (Table 2, 5-Class). Structure classes of Thin, Advanced Regeneration, and Mature had some error as well (Table 2). Complete error matrices for all six initial 2013 classifications in Table 1 can be seen in Supplementary Material, Tables S2–S7.

3.2. Yearly Forest Structure Classifications

We identified possible change pixels for 43 years of imagery using the 2013 image as the master in 43 CVA processes. On average, approximately 44% of the pixels in each change image were identified as possibly changed (these numbers include large data gaps in the original imagery where, for a particular year, one or more scenes were cloud-covered and thus not included in the analysis). The threshold of change values over-predicted possible change, but we chose it to ensure that all possible change pixels would be reclassified during the CVA process. We thus reclassified many pixels that likely had not changed; however, nearly all unchanged pixels were reclassified to their original 2013 class.
We produced weighted error matrices for the initial classifications of 2005, 1995, and 1975 (complete error matrices can be seen in Supplementary Material, Tables S8–S25). The overall accuracy for each of the 5-class classifications (where Open and Stand Initiation were combined as one class) was 55% for 2005, 58% for 1975, and 60% for 1995 (Table 1) (recall that the initial master 2013 classification had an overall accuracy of 73%). Overall accuracies for the 2-class classifications were 97% for 1995 and 2005 and 99% for 1975 (Table 1).

3.3. Time-Series-Informed Yearly Classifications

After using the time series and temporal continuity rules to inform each of the yearly classifications, we produced weighted error matrices for every level of classification (eight levels shown in Figure 4) for the final 2013, 2005, 1995, and 1975 predictions (complete error matrices can be seen in Supplementary Material, Tables S26–S57). We observed increased overall accuracies and decreased or equivalent standard errors for most of the classifications (Table 3) and those that did not increase in overall accuracy decreased by less than 1%. Most importantly, the overall accuracy of the 6-class prediction for 2013 increased by 4% and the overall accuracies for the 6-class classifications for 2005 and 1995 were higher than the initial 5-class overall accuracies. The overall accuracy for the 1975 6-class classification was 43%, however, all of the 1975 4-class through 2-class overall accuracies increased from the initial classifications. Most of the errors in the 1975 6-class classification were (1) stand initiation being classified as open and (2) thin and advanced regeneration being classified as mature (see Supplementary Material, Tables S50–S57).
The temporal continuity rules applied to the time series enabled Open to be separated from Stand Initiation in the final 2013 classification resulting in better all-around user’s and producer’s accuracies for the two classes in the 6-class prediction (Table 4). Thin and Advanced Regeneration are still somewhat confused with one another within the classifications (Table 4). However, the user’s and producer’s accuracies for the Thin, Advanced Regeneration, and Mature classes in the 6- and 5-class either remained the same or increased from the initial classifications (Table 2 and Table 4).
Our assessment of field-derived forest structural metrics across our forest structure classes included a sample of 500 FIA subplots distributed across our 2013 forest structure classes: Alpine (n = 14), Open (n = 71), Stand Initiation (n = 43), Thin (n = 79), Advanced Regeneration (n = 62), and Mature (n = 231). Our MANOVA indicated statistical differences among forest structure classes (F20,1928 = 11.33, p <0.001) and the patterns were intuitive and consistent (Figure 5). For instance, tree size, tree density, and tree heights were lower in our Alpine and Stand Initiation classes relative to all other classes (Figure 5). This was followed by our Open class, which exhibited slightly larger tree sizes, higher tree densities, and taller trees. Our Thin class exhibited larger tree sizes, higher tree densities, taller trees, and more canopy cover than our Open, Stand Initiation, and Alpine classes. Further, our Advanced Regeneration and Mature classes exhibited more trees, higher canopy cover, and taller trees than all other classes, and Mature was generally greater than Advanced Regeneration for all metrics.

4. Discussion

4.1. Time Series Analysis

The overall accuracies for the initial master 2013 classification ranged from 68% for the 6-class map to 98% for the 2-class map (Table 1). We observed an increase in the overall accuracy of the final 2013 6-class classification of 4% after applying the temporal continuity rules (Table 3). The final overall accuracies for all years of classifications that separated the Alpine class and separated the more closed-canopy forested classes (Advanced Regeneration and Mature) from sparse or unforested classes (Open, Stand Initiation, and Thin) were all above 85% (Table 3, 2- and 3-class classifications). The clear variation of accuracies from year to year is in part because change vector analysis is likely to have greater errors as time from the master year increases, resulting in larger numbers of changed pixels and therefore increased potential for compound error. Accuracies decreased markedly as we attempted classifications that separated the different closed- or near-closed-canopy forests from each other, especially for the older imagery. The 6-class classifications that separated classes without trees (Open vs. Stand Initiation) resulted in the lowest overall accuracies, and this was especially true with the MSS data. Open and Stand Initiation are characterized by similar vegetation amounts, with the primary differences attributable to species, i.e., Stand Initiation has seedling conifer species, where Open is composed of more grasses and/or shrubs. This type of distinction within a classification could benefit from the greater spectral and radiometric qualities of the later satellite systems. Also, the difficulty distinguishing Open from Stand Initiation might be a function of using only Tasseled Cap components for the classifications and change analyses and not incorporating the thermal data acquired by Landsat sensors into these analyses—the thermal data could have helped identify areas affected by stand-replacing-disturbance [40].
The trajectories that were created with these time series provide critical information to land managers and allow for trend analyses of changes that occur over long periods of time (Figure 6). We observed trends in each of the six forest structure classes and compared them to historic knowledge and expected overall changes in the study area after disturbances. We expected to see forest regeneration after harvest or fire [41,42,43], for example, and these trajectories show a general decline in the amount of Mature trees over time along with a general increase in Stand Initiation and Advanced Regeneration trees (Figure 6), supporting the expectation from a pattern of forest harvesting and especially wildfires over the last decade within this study area. The Alpine class remains mostly the same over time, with some decrease shown from 1972 to 1984 (from the MSS data) (Figure 6). This slight decrease could be a function of increases in tree line [44,45,46]. Open also remains mostly the same with a larger decrease shown from 1972 to 1984 (Figure 6). The portions of these trajectories before 1984 (i.e., from MSS data) are all more irregular/noisier than the portions from 1984 forward (Figure 6). We believe the forthcoming enhancements and calibration of MSS data [47] will help to smooth the pre-1984 data and will subsequently allow the early classification trajectories to correspond more readily to the later trajectories. The higher variability in early years, however, might also be partially the result of changing forest management practices following the adoption of the National Forest Management Act in 1976.
In addition to overall trends, we can analyze portions of each class trajectory to observe large changes as they happen. For instance, from 2011 to 2013, we see an increase in the Mature class while at the same time we see a similarly rapid decrease in the Advanced Regeneration class and a smaller decrease in the Thin class (Figure 6). This could be a function of the successional processes of the forest where over a long period of time Advanced Regeneration develops into Mature, thus increasing the Mature class and decreasing the Advanced Regeneration class. On the other hand, the Stand Initiation class shows a small increase from 2011 to 2012, corresponding to the decrease of the Advanced Regeneration class at the same time. This could be explained by a large number of fires and harvests in 2011 and 2012 resulting in the loss of some forest cover that is replaced by the Stand Initiation class. While recorded drastic changes and the physical relationships among the forest structure classes can be logical explanations for the changes observed in the trajectories, we must also consider that there might be artifacts in the data that produce faulty information. For example, the final 2015 classification contains some errors (described below in Section 4.2) that are reflected in the last segment of these class trajectories.
Changes from one class to another can be observed with a Sankey diagram (Figure 7; decadal diagram in Supplementary Material, Figure S1). The Sankey diagram illustrates the state transitions among classes over our 44-year study period. The diagram shows nodes that represent classes and links between those nodes whose heights are proportional to the area of each class at a given time and the amount transitioned to the next period. Similar to the general decline of the Mature class trajectory (Figure 6), much of the Mature forest present in 1972 has transitioned nearly evenly to the three other forest classes, with slightly more transition to Stand Initiation, followed by Thin, and finally Advanced Regeneration (Figure 7). This could be due to a combination of clearcut harvests followed by succession and thinning, either by harvest or disturbance events. Approximately one quarter of the area of the Open class present in 1972 has transitioned to forested classes, spread somewhat evenly among Thin, Advanced Regeneration, and Mature. This could be the result of new forest encroachment at existing stand edges, or possible misclassification of the Stand Initiation class in 1972, which over-predicted the amount of Open at that time. In general, we observed that over the 44-year study period the region has become more forested and the forests are younger, less dense, and more uniform (based on our class definitions in Supplementary Material: Dictionary S1: Definitions of Forest Structure Classes) (Figure 7).
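The link widths of such a diagram are simply a from-class/to-class area tally between two classified dates. A minimal sketch of that tally follows; the function and argument names are illustrative, and the 0.09 ha pixel area assumes a 30-m Landsat pixel:

```python
import numpy as np

def transition_areas(class_t1, class_t2, classes, pixel_area_ha=0.09):
    """Tally the area moving from each class to each other class between
    two classified rasters; these from->to totals are the link widths of
    a Sankey diagram. A 30-m Landsat pixel covers 0.09 ha."""
    links = {}
    for c1 in classes:
        for c2 in classes:
            # Count pixels labeled c1 at time 1 and c2 at time 2
            n = np.sum((class_t1 == c1) & (class_t2 == c2))
            if n:
                links[(c1, c2)] = n * pixel_area_ha
    return links

# Toy 2x2 example: one Mature (M) pixel becomes Stand Initiation (S),
# the Open (O) pixels persist
t1 = np.array([["M", "M"], ["O", "O"]])
t2 = np.array([["M", "S"], ["O", "O"]])
links = transition_areas(t1, t2, ["O", "S", "M"])
```

The resulting dictionary maps each (from, to) class pair to hectares and can be passed directly to a plotting library's Sankey routine.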

4.2. Classification of Forest Structure

Forest structure maps often provide a picture of a forest at only one instance in time. From a management perspective, the ability to identify forest structure over time using image analysis is of great importance. There is also a pressing need to map changes in forest structure spatially, given forest management policies and accelerated disturbances from climate change.
This collection of 44 years of classified maps allows land managers to visualize changes to forest structure that have occurred over decades (Figure 8). Users can observe successional trends in areas that have experienced gradual changes, such as beetle infestations, or drastic, stand-replacing disturbances, such as harvest or fire. Fire scars (red circles in Figure 8, 2007) and clearcuts (blue circle around checkerboard pattern in Figure 8, 2012) are the most obvious visual changes. Insect damage that affects trees slowly is more difficult (but not impossible) to observe, appearing as a gradual decrease of the Mature class over the course of the time series (as seen in the Mature trajectory, Figure 6 and Figure 7, and from 1972 to 2015 in black circles in Figure 8). Our maps show clear evidence of changes across the landscape over time.
We were unable to perform accuracy assessments on all 44 classified images (initial and final), but we assumed that the accuracies of all the final maps in the time series are similar to those of 2013, 2005, 1995, and 1975, especially after applying the temporal continuity rules. Notably, several of the continuity rules rely on surrounding dates; the first and last dates in the time series, therefore, might be less reliable because they cannot take advantage of these particular rules. An example can be seen in the 2015 results, where Landsat 7 imagery was used (Figure 8). The effects of the scan line corrector failure remain evident, whereas in other years the continuity rules might correct many of these errors.
We demonstrated that freely available remote sensing data were sufficient to identify forest structure classes across a diverse forested landscape. The fact that field-derived FIA forest structural metrics (Figure 5) followed expected patterns of forest characteristics across structure classes indicates that the structure classes were ecologically meaningful. These strong relationships support the legitimacy of the structure class definitions and corroborate the accuracy assessment of the final 2013 forest structure map.

4.3. Change Vector Analysis and Temporal Continuity Rules

CVA is advantageous for mapping change across a long time period because areas of no change are not reclassified repeatedly, not only saving time but also avoiding compound error in places that have not experienced change throughout the time series [48]. We over-predicted the potential change in each yearly image by choosing change thresholds that included, on average, 44% of the pixels in the study area. However, we can be confident that the unchanged pixels used to train the classification of the changed pixels were, indeed, accurately classified, and that unchanged pixels falling within the change threshold were likely reclassified to their original class.
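The thresholding step of CVA can be sketched in a few lines. This is a minimal illustration, not our exact implementation: the function name is ours, and the 56th-percentile cutoff is chosen only so that roughly 44% of pixels exceed the threshold, mirroring the average reported above:

```python
import numpy as np

def cva_change_mask(tc_t1, tc_t2, percentile=56.0):
    """Flag candidate change pixels via change vector analysis (CVA).

    tc_t1, tc_t2 : arrays of shape (3, rows, cols) holding the Tasseled
    Cap brightness, greenness, and wetness components for two dates.
    Pixels whose change-vector magnitude exceeds the percentile-based
    threshold are flagged for reclassification; the rest keep their
    existing class label.
    """
    # Euclidean magnitude of the change vector across the 3 components
    diff = tc_t2.astype(float) - tc_t1.astype(float)
    magnitude = np.sqrt((diff ** 2).sum(axis=0))
    # Keep the top (100 - percentile)% of magnitudes as candidate change
    threshold = np.percentile(magnitude, percentile)
    return magnitude > threshold

# Synthetic example: two dates with small random differences
rng = np.random.default_rng(0)
t1 = rng.normal(size=(3, 100, 100))
t2 = t1 + rng.normal(scale=0.1, size=(3, 100, 100))
changed = cva_change_mask(t1, t2)
```

Only the pixels in the resulting mask need to be re-run through the classifier for that year; unmasked pixels inherit the previous year's class.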
Looking at changes in forest structure over time allows us to improve each yearly classification using logic and a priori knowledge to fix minor errors within the time series. We applied the temporal continuity rules that we developed for our study area to the entire time series in order to improve upon the initial accuracies of the forest structure classes. Through this process, we were able to improve our ability to separate Open from Stand Initiation and increased the overall accuracy of the master 2013 6-class prediction (initial overall accuracy of 68%; final overall accuracy of 72%) (Table 1 and Table 3). The overall accuracies for the 6-class predictions for 2005, 1995, and 1975 were lower than for the 2013 prediction (Table 1 and Table 3). The relatively lower accuracies for the maps created through the CVA process might be a result of the lower spatial resolution of the images used to identify the validation points. The 2013 validation points were collected from 1-m NAIP imagery within Google Earth, while the 2005 and 1995 validation points were collected from 30-m (Landsat) imagery within Google Earth (with comparisons to more recent 1-m NAIP images), and the 1975 validation points were collected from historic aerial photographs at 1:16,000- to 1:40,000-scale.
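A temporal continuity rule of this kind operates on each pixel's yearly sequence of class labels. As a simplified illustration of a single-year gap-fill rule (the exact logic of our eight rules is given in Supplementary Material: Logic S1; the version below is an assumption for illustration only):

```python
def gap_fill(series):
    """Illustrative single-year gap fill: if one year's class differs
    from identical classes on both sides, treat it as classification
    noise and replace it with the surrounding class. `series` is one
    pixel's list of yearly class labels."""
    out = list(series)
    for i in range(1, len(out) - 1):
        # A lone deviant year flanked by two matching years is filled in
        if out[i - 1] == out[i + 1] != out[i]:
            out[i] = out[i - 1]
    return out

# 'M' = Mature, 'T' = Thin: the lone 'T' in year 3 is smoothed away
print(gap_fill(['M', 'M', 'T', 'M', 'M']))  # → ['M', 'M', 'M', 'M', 'M']
```

Applied across every pixel's 44-year sequence, rules like this remove ecologically implausible one-year class flips while leaving genuine transitions (which persist for multiple years) untouched.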
The LandsatLinkr (LLR) program was invaluable for this project. LLR calibrated MSS data to TM data and OLI data to ETM+ data, harmonizing the four types of imagery to be comparable across all 44 years; it also created a Tasseled Cap Wetness component for the MSS data, allowing us to use all three Tasseled Cap components in the CVA. Additionally, without the automated methods of LLR, preparing each yearly image for classification through the change vector analysis process would have taken orders of magnitude longer.

5. Conclusions

There are three novel outcomes of this study. First, we used standard CVA methods to classify forest structure across a diverse forest landscape from 1972 to 2015 using only M2O and terrain data, explicitly distinguishing forest structure classes for single dates. Second, we applied temporal continuity rules to the time series itself to improve the accuracy of each individual yearly forest structure classification. Finally, we demonstrated that the pre-1984 MSS data are important within these time series and believe that the USGS goal of calibrating and radiometrically enhancing the MSS data is timely and very valuable. We argue that there are significant advantages to using the MSS data even though their quality is lower than that of the TM data and later sensors. Using MSS data within long-term time series studies can add critical temporal information with which to learn about our landscapes and how they are changing over time.
The process of using information from the time series to inform each year of any classification can be improved upon by refining the rulesets to each specific classification, paying particular attention to the first and last years of the time series. Additionally, performing accuracy assessment after each ruleset could perhaps identify rules that need adjusting. Finally, incorporating spatial rulesets along with the temporal rulesets might smooth out more “salt and pepper” pixels and clean edges where gradual changes are happening, but should be used with caution, as such smoothing can easily result in the loss of real information.
We classified forest structure in this manner; however, other forest and land use classifications could benefit from this process, particularly where land management practices have changed within the time period of the M2O. Forest managers can utilize this process to estimate the increase or decrease of harvestable trees. Wildlife biologists can utilize this process to estimate the amount of viable habitat that has been lost or gained. Land managers can utilize this process to provide evidence of the effects of management practices. We should be using the entire length of the M2O dataset, and this process is one way to do so. The Landsat program will continue to acquire Earth imagery into the foreseeable future. Time series analyses performed with these data will only grow in importance for land managers as land management practices adapt and change according to current policies. Indeed, Landsat time series might be used to address additional important issues that cannot be addressed with yearly estimates. The accurate classification of images within these time series can be a key to the success of future land management policies.

Supplementary Materials

The following are available online at https://www.mdpi.com/1999-4907/9/4/157/s1, Table S1: Every Landsat image used in the production of yearly composite images for our study area; Dictionary S1: Definitions of Forest Structure Classes; Table S2: Full error matrix of estimated area proportions for the INITIAL 6-class classification for 2013; Table S3: Full error matrix of estimated area proportions for the INITIAL 5-class classification for 2013 (Open and Stand Initiation combined as one class); Table S4: Full error matrix of estimated area proportions for the INITIAL 5-class classification for 2013 (Advanced Regeneration and Mature combined as one class); Table S5: Full error matrix of estimated area proportions for the INITIAL 4-class classification for 2013; Table S6: Full error matrix of estimated area proportions for the INITIAL 3-class classification for 2013; Table S7: Full error matrix of estimated area proportions for the INITIAL 2-class classification for 2013; Table S8: Full error matrix of estimated area proportions for the INITIAL 5-class classification for 2005 (Open and Stand Initiation combined as one class); Table S9: Full error matrix of estimated area proportions for the INITIAL 4-class classification for 2005; Table S10: Full error matrix of estimated area proportions for the INITIAL 3-class classification for 2005 (Open, Stand Initiation, and Thin combined as one class); Table S11: Full error matrix of estimated area proportions for the INITIAL 3-class classification for 2005 (Thin, Advanced Regeneration, and Mature combined as one class); Table S12: Full error matrix of estimated area proportions for the INITIAL 2-class classification for 2005 (Advanced Regeneration and Mature combined as one class versus everything else); Table S13: Full error matrix of estimated area proportions for the INITIAL 2-class classification for 2005 (Alpine versus everything else); Table S14: Full error matrix of estimated area proportions for the INITIAL 5-class 
classification for 1995; Table S15: Full error matrix of estimated area proportions for the INITIAL 4-class classification for 1995; Table S16: Full error matrix of estimated area proportions for the INITIAL 3-class classification for 1995 (Open, Stand Initiation, and Thin combined as one class); Table S17: Full error matrix of estimated area proportions for the INITIAL 3-class classification for 1995 (Thin, Advanced Regeneration, and Mature combined as one class); Table S18: Full error matrix of estimated area proportions for the INITIAL 2-class classification for 1995 (Advanced Regeneration and Mature combined as one class versus everything else); Table S19: Full error matrix of estimated area proportions for the INITIAL 2-class classification for 1995 (Alpine versus everything else); Table S20: Full error matrix of estimated area proportions for the INITIAL 5-class classification for 1975; Table S21: Full error matrix of estimated area proportions for the INITIAL 4-class classification for 1975; Table S22: Full error matrix of estimated area proportions for the INITIAL 3-class classification for 1975 (Open, Stand Initiation, and Thin combined as one class); Table S23: Full error matrix of estimated area proportions for the INITIAL 3-class classification for 1975 (Thin, Advanced Regeneration, and Mature combined as one class); Table S24: Full error matrix of estimated area proportions for the INITIAL 2-class classification for 1975 (Advanced Regeneration and Mature combined as one class versus everything else); Table S25: Full error matrix of estimated area proportions for the INITIAL 2-class classification for 1975 (Alpine versus everything else); Table S26: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 6-class classification for 2013; Table S27: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 5-class classification for 2013 (Open and Stand Initiation combined as one class); Table S28: 
Full error matrix of estimated area proportions for the FINAL (time-series-informed) 5-class classification for 2013 (Advanced Regeneration and Mature combined as one class); Table S29: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 4-class classification for 2013; Table S30: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 3-class classification for 2013 (Open, Stand Initiation, and Thin combined as one class); Table S31: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 3-class classification for 2013 (Thin, Advanced Regeneration, and Mature combined as one class); Table S32: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 2-class classification for 2013 (Advanced Regeneration and Mature combined as one class versus everything else); Table S33: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 2-class classification for 2013 (Alpine versus everything else); Table S34: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 6-class classification for 2005; Table S35: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 5-class classification for 2005 (Open and Stand Initiation combined as one class); Table S36: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 5-class classification for 2005 (Advanced Regeneration and Mature combined as one class); Table S37: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 4-class classification for 2005; Table S38: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 3-class classification for 2005 (Open, Stand Initiation, and Thin combined as one class); Table S39: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 3-class classification for 2005 (Thin, 
Advanced Regeneration, and Mature combined as one class); Table S40: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 2-class classification for 2005 (Advanced Regeneration and Mature combined as one class versus everything else); Table S41: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 2-class classification for 2005 (Alpine versus everything else); Table S42: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 6-class classification for 1995; Table S43: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 5-class classification for 1995 (Open and Stand Initiation combined as one class); Table S44: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 5-class classification for 1995 (Advanced Regeneration and Mature combined as one class); Table S45: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 4-class classification for 1995; Table S46: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 3-class classification for 1995 (Open, Stand Initiation, and Thin combined as one class); Table S47: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 3-class classification for 1995 (Thin, Advanced Regeneration, and Mature combined as one class); Table S48: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 2-class classification for 1995 (Advanced Regeneration and Mature combined as one class versus everything else); Table S49: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 2-class classification for 1995 (Alpine versus everything else); Table S50: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 6-class classification for 1975; Table S51: Full error matrix of estimated area proportions for the 
FINAL (time-series-informed) 5-class classification for 1975 (Open and Stand Initiation combined as one class); Table S52: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 5-class classification for 1975 (Advanced Regeneration and Mature combined as one class); Table S53: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 4-class classification for 1975; Table S54: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 3-class classification for 1975 (Open, Stand Initiation, and Thin combined as one class); Table S55: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 3-class classification for 1975 (Thin, Advanced Regeneration, and Mature combined as one class); Table S56: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 2-class classification for 1975 (Advanced Regeneration and Mature combined as one class versus everything else); Table S57: Full error matrix of estimated area proportions for the FINAL (time-series-informed) 2-class classification for 1975 (Alpine versus everything else); Figure S1: Decadal Sankey diagram of the state transition from one class to another of the six forest structure classes from 1972 to 1980, 1980 to 1990, 1990 to 2000, 2000 to 2010, and 2010 to 2015; Logic S1: Logic used for time-series-informed-rules.

Acknowledgments

This work was supported by USDA Forest Service, Joint Venture Agreement 12-CS-11221635-176 and many scientists at the Rocky Mountain Research Station in Missoula, Montana. We thank S. Brown for his advice on NAIP texture analysis and R. Bush and B. Reyes for their help with FIA data.

Author Contributions

R.L.L., J.R.S., and S.L.S. conceived, designed, and coordinated the research; S.L.S. performed the research; S.L.S. and J.D.H. analyzed the data; L.E.O. contributed to data collection and organization; J.D.B. and W.B.C. conceived and designed LandsatLinkr and assisted S.L.S. in the application of LandsatLinkr to the research. S.L.S. led the writing of the paper, with assistance from all co-authors.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Cazcarra-Bes, V.; Tello-Alonso, M.; Fischer, R.; Heym, M.; Papathanassiou, K. Monitoring of forest structure dynamics by means of L-band SAR tomography. Remote Sens. 2017, 9, 1229. [Google Scholar] [CrossRef]
  2. DeVries, B.; Verbesselt, J.; Kooistra, L.; Herold, M. Robust monitoring of small-scale forest disturbance in a tropical montane forest using Landsat time series. Remote Sens. Environ. 2015, 161, 107–120. [Google Scholar] [CrossRef]
  3. Hill, R.A.; Hinsley, S.A. Airborne lidar for woodland habitat quality monitoring: Exploring the significance of lidar data characteristics when modelling organism-habitat relationships. Remote Sens. 2015, 7, 3446–3466. [Google Scholar] [CrossRef]
  4. Lefsky, M.A.; Cohen, W.B.; Spies, T.A. An evaluation of alternate remote sensing products for forest inventory, monitoring, and mapping of Douglas-fir forests in western Oregon. Can. J. For. Res. 2000, 31, 78–87. [Google Scholar] [CrossRef]
  5. Noss, R.F. Assessing and monitoring forest biodiversity: A suggested framework and indicators. For. Ecol. Manag. 1999, 115, 135–146. [Google Scholar] [CrossRef]
  6. Kennedy, R.E.; Yang, Z.; Cohen, W.B. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr—Temporal segmentation algorithms. Remote Sens. Environ. 2010, 114, 2897–2910. [Google Scholar] [CrossRef]
  7. Cohen, W.B.; Yang, Z.; Kennedy, R. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 2. TimeSync—Tools for calibration and validation. Remote Sens. Environ. 2010, 114, 2911–2924. [Google Scholar] [CrossRef]
  8. Brooks, E.B.; Coulston, J.W.; Wynne, R.H.; Thomas, V.A. Improving the precision of dynamic forest parameter estimates using Landsat. Remote Sens. Environ. 2010, 179, 162–169. [Google Scholar] [CrossRef]
  9. Braaten, J.D.; Cohen, W.B.; Yang, Z. LandsatLinkr. Zenodo 2017. [Google Scholar] [CrossRef]
  10. Pasquarella, V.J.; Bradley, B.A.; Woodcock, C.E. Near-Real-Time Monitoring of Insect Defoliation Using Landsat Time Series. Forests 2017, 8, 275. [Google Scholar] [CrossRef]
  11. Bunker, B.E.; Tullis, J.A.; Cothren, J.D.; Casana, J.; Aly, M.H. Object-based Dimensionality Reduction in Land Surface Phenology Classification. AIMS Geosci. 2016, 2, 302–328. [Google Scholar] [CrossRef]
  12. Pflugmacher, D.; Cohen, W.B.; Kennedy, R.E.; Yang, Z. Using Landsat-derived disturbance and recovery history and lidar to map forest biomass dynamics. Remote Sens. Environ. 2014, 151, 124–137. [Google Scholar] [CrossRef]
  13. Hyde, P.; Dubayah, R.; Peterson, B.; Blair, J.B.; Hofton, M.; Hunsaker, C.; Knox, R.; Walker, W. Mapping forest structure for wildlife habitat analysis using waveform lidar: Validation of montane ecosystems. Remote Sens. Environ. 2005, 96, 427–437. [Google Scholar] [CrossRef]
  14. Hyde, P.; Dubyah, R.; Walker, W.; Blair, J.B.; Hofton, M.; Hunsaker, C. Mapping forest structure for wildlife habitat analysis using multi-sensor (LiDAR, SAR/InSAR, ETM+, Quickbird) synergy. Remote Sens. Environ. 2006, 102, 63–73. [Google Scholar] [CrossRef]
  15. Lim, K.; Treitz, P.; Wulder, M.; St-Onge, B.; Flood, M. LiDAR remote sensing of forest structure. Prog. Phys. Geogr. 2003, 27, 88–106. [Google Scholar] [CrossRef]
  16. Zald, H.S.; Wulder, M.A.; White, J.C.; Hilker, T.; Hermosilla, T.; Hobart, G.W.; Coops, N.C. Integrating Landsat pixel composite and change metrics with lidar plots to predictively map forest structure and aboveground biomass in Saskatchewan, Canada. Remote Sens. Environ. 2016, 176, 188–201. [Google Scholar] [CrossRef]
  17. Zimble, D.A.; Evans, D.L.; Carlson, G.C.; Parker, R.C.; Grado, S.C.; Gerard, P.D. Characterizing vertical forest structure using small-footprint airborne LiDAR. Remote Sens. Environ. 2003, 87, 171–182. [Google Scholar] [CrossRef]
  18. Wulder, M.A.; Masek, J.G.; Cohen, W.B.; Loveland, T.R.; Woodcock, C.E. Opening the archive: How free data has enabled the science and monitoring promise of Landsat. Remote Sens. Environ. 2012, 122, 2–10. [Google Scholar] [CrossRef]
  19. Wulder, M.A.; White, J.C.; Loveland, T.R.; Woodcock, C.E.; Belward, A.S.; Cohen, W.B.; Fosnight, E.A.; Shaw, J.; Masek, J.G.; Roy, D.P. The global Landsat archive: Status, consolidation, and direction. Remote Sens. Environ. 2016, 185, 271–283. [Google Scholar] [CrossRef]
  20. Ahmed, O.S.; Wulder, M.A.; White, J.C.; Hermosilla, T.; Coops, N.C.; Franklin, S.E. Classification of annual non-stand replacing boreal forest change in Canada using Landsat time series: A case study in northern Ontario. Remote Sens. Lett. 2017, 8, 29–37. [Google Scholar] [CrossRef]
  21. Huang, C.; Goward, S.N.; Masek, J.G.; Thomas, N.; Zhu, Z.; Vogelmann, J.E. An automated approach for reconstructing recent forest disturbance history using dense Landsat time series stacks. Remote Sens. Environ. 2010, 114, 183–198. [Google Scholar] [CrossRef]
  22. Neigh, C.S.; Bolton, D.K.; Diabate, M.; Williams, J.J.; Carvalhais, N. An automated approach to map the history of forest disturbance from insect mortality and harvest with Landsat time-series data. Remote Sens. 2014, 6, 2782–2808. [Google Scholar] [CrossRef]
  23. Schroeder, T.A.; Wulder, M.A.; Healy, S.P.; Moisen, G.G. Mapping wildfire and clearcut harvest disturbances in boreal forests with Landsat time series data. Remote Sens. Environ. 2011, 115, 1421–1433. [Google Scholar] [CrossRef]
  24. Zhao, F.R.; Meng, R.; Huang, C.; Zhao, M.; Zhao, F.A.; Gong, P.; Yu, L.; Zhu, Z. Long-term post-disturbance forest recovery in the greater Yellowstone Ecosystem analyzed using Landsat time series stack. Remote Sens. 2016, 8, 898. [Google Scholar] [CrossRef]
  25. Pflugmacher, D.; Cohen, W.B.; Kennedy, R.E. Using Landsat-derived disturbance history (1972–2010) to predict current forest structure. Remote Sens. Environ. 2012, 122, 146–165. [Google Scholar] [CrossRef]
  26. Brown, S.; Barber, J. The Region 1 Existing Vegetation Mapping Program (VMap), Flathead National Forest Overview, version 12; Region One Vegetation Classification, Mapping, Inventory and Analysis Report 12–34; USDA Forest Service: Missoula, MT, USA, 2012. [Google Scholar]
  27. Kosterman, M.K.; Squires, J.R.; Holbrook, J.D.; Pletscher, D.H.; Hebblewhite, M. Forest structure provides the income for reproductive success in a southern population of Canada lynx. Ecol. Appl. 2018. [Google Scholar] [CrossRef] [PubMed]
  28. Crist, E.P.; Cicone, R.C. A physically-based transformation of Thematic Mapper data–The TM Tasseled Cap. IEEE Trans. Geosci. Remote 1984, 3, 256–263. [Google Scholar] [CrossRef]
  29. Crist, E.P. A TM tasseled cap equivalent transformation for reflectance factor data. Remote Sens. Environ. 1985, 17, 301–306. [Google Scholar] [CrossRef]
  30. Kennedy, R.F.; Cohen, W.B. Automated designations of tie-points for image-to-image coregistration. Int. J. Remote Sens. 2003, 24, 3467–3490. [Google Scholar] [CrossRef]
  31. Braaten, J.D.; Cohen, W.B.; Yang, Z. Automated cloud and cloud shadow identification in Landsat MSS imagery for temperate ecosystems. Remote Sens. Environ. 2015, 169, 128–138. [Google Scholar] [CrossRef]
  32. Chavez, P.S. Image-based atmospheric corrections–revisited and improved. Photogramm. Eng. Remote Sens. 1996, 62, 1025–1035. [Google Scholar] [CrossRef]
  33. Chavez, P.S., Jr. An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data. Remote Sens. Environ. 1988, 24, 459–479. [Google Scholar] [CrossRef]
  34. Masek, J.G.; Vermote, E.F.; Saleous, N.E.; Wolfe, R.; Hall, F.G.; Huemmrich, K.F.; Gao, F.; Kutler, J.; Lim, T.K. A Landsat surface reflectance dataset for North America, 1990–2000. IEEE Geosci. Remote Sens. 2006, 3, 68–72. [Google Scholar] [CrossRef]
  35. Vermote, E.; Justice, C.; Claverie, M.; Franch, B. Preliminary analysis of the performance of the Landsat 8/OLI land surface reflectance product. Remote Sens. Environ. 2016, 185, 46–56. [Google Scholar] [CrossRef]
  36. Zhu, Z.; Woodcock, C.E. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 2012, 118, 83–94. [Google Scholar] [CrossRef]
  37. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
  38. Lambin, E.F.; Strahler, A.H. Change-vector analysis in multitemporal space: A tool to detect and categorize land-cover change processes using high temporal-resolution satellite data. Remote Sens. Environ. 1994, 48, 231–244. [Google Scholar] [CrossRef]
  39. Crookston, N.; Dixon, G. The forest vegetation simulator: A review of its structure, content, and applications. Comput. Electron. Agric. 2005, 49, 60–80. [Google Scholar] [CrossRef]
  40. Hais, M.; Kuĉera, T. Surface temperature change of spruce forest as a result of bark beetle attack: Remote sensing and GIS approach. Eur. J. For. Res. 2008, 127, 327–336. [Google Scholar] [CrossRef]
  41. Oliver, C.D. Forest development in North America following major disturbances. For. Ecol. Manag. 1980, 3, 153–168. [Google Scholar] [CrossRef]
  42. Shatford, J.P.A.; Hibbs, D.E.; Puettmann, K.J. Conifer regeneration after forest fire in the Klamath-Siskiyous: How much, how soon? J. For. 2007, 105, 139–146. [Google Scholar] [CrossRef]
  43. Swanson, M.E.; Franklin, J.F.; Beschta, R.L.; Crisafulli, C.M.; DellaSala, D.A.; Hutto, R.L.; Lindenmayer, D.B.; Swanson, F.J. The forgotten stage of forest succession: Early-successional ecosystems on forest sites. Front. Ecol. Environ. 2011, 9, 117–125. [Google Scholar] [CrossRef]
  44. Carlson, B.Z.; Georges, D.; Rabatel, A.; Randin, C.F.; Renaud, J.; Delestrade, A.; Zimmermann, N.E.; Choler, P.; Thuiller, W. Accounting for tree line shift, glacier retreat and primary succession in mountain plant distribution models. Divers. Distrib. 2014, 20, 1379–1391. [Google Scholar] [CrossRef]
  45. Gehrig-Fasel, J.; Guisan, A.; Zimmerman, N.E. Tree line shifts in the Swiss Alps: Climate change or land abandonment? J. Veg. Sci. 2007, 18, 571–582. [Google Scholar] [CrossRef]
  46. Grace, J.; Berninger, F.; Nagy, L. Impacts of climate change on the tree line. Ann. Bot. 2002, 90, 537–544. [Google Scholar] [CrossRef] [PubMed]
  47. Sauer, B.; Morfitt, R.; Dwyer, J. Landsat Product Improvement, Collection Processing and Management Update, MSS and No PCD Data, Surface Reflectance Status, U.S. Analysis Ready Data and Global Considerations. In Proceedings of the Landsat Science Team Meeting, Boston, MA, USA, 10–12 January 2017; Boston University: Boston, MA, USA, 2017. [Google Scholar]
  48. Savage, S.L.; Lawrence, R.L. Vegetation dynamics in Yellowstone’s Northern Range: 1985 to 1999. Photogramm. Eng. Remote Sens. 2010, 76, 547–556. [Google Scholar] [CrossRef]
Figure 1. Location map for the study area within Montana, USA. Boundaries of WRS-1 (MSS) scenes and WRS-2 (TM, ETM+, and OLI) scenes are shown over the study area. Scene boundaries are labeled as path/row.
Figure 2. Different levels of the initial master 2013 classification from 2 classes to 6 classes. Only the initial 2013 classification had 6 classes—the initial classification of every other yearly classification started with 5 classes because Open and Stand Initiation were too similar to separate at this stage of the study. A = Alpine, O = Open, S = Stand Initiation, T = Thin, R = Advanced Regeneration, M = Mature. See Supplementary Material: Dictionary S1: Definitions of Forest Structure Classes for detailed definitions of each class. Each level was produced with unique combinations of binary classifications (see Supplementary Material, Tables S2–S7 for further details).
Figure 2. Different levels of the initial master 2013 classification from 2 classes to 6 classes. Only the initial 2013 classification had 6 classes—the initial classification of every other yearly classification started with 5 classes because Open and Stand Initiation were too similar to separate at this stage of the study. A = Alpine, O = Open, S = Stand Initiation, T = Thin, R = Advanced Regeneration, M = Mature. See Supplementary Material: Dictionary S1: Definitions of Forest Structure Classes for detailed definitions of each class. Each level was produced with unique combinations of binary classifications (see Supplementary Material, Tables S2–S7 for further details).
Figure 3. Examples of temporal continuity rules applied to a variety of 15-year time series. Eight temporal continuity rules were applied to the time series to improve accuracy and to separate difficult-to-distinguish classes such as Open versus Stand Initiation: (1) Gap fill #1, (2) Continuity fill, (3) Smoothing, (4) Separate Open from Stand Initiation, (5) Gap fill #2, (6) Separate Open from Stand Initiation Year 1, (7) Alpine based on elevation, and (8) Non-forest based on elevation. Each rule is described in detail in Section 2.7, and the code/logic used for each rule appears in Supplementary Material: Logic S1: Logic Used for Time-Series-Informed Rules.
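The flavor of these rules can be sketched in miniature. Below is a simplified, hypothetical Python sketch of two such rules (a gap fill and a one-year smoothing pass) applied to a per-pixel sequence of yearly class labels; the class codes and example sequence are invented for illustration, and the authors' actual eight-rule logic is given in Supplementary Material: Logic S1.

```python
# Simplified sketch of two temporal continuity rules (gap fill and
# smoothing) applied to a per-pixel sequence of yearly class labels.
# Class codes and the example series are hypothetical.

def gap_fill(labels, nodata=None):
    """Fill missing years with the preceding year's class."""
    filled = list(labels)
    for i in range(1, len(filled)):
        if filled[i] is nodata:
            filled[i] = filled[i - 1]
    return filled

def smooth(labels):
    """Replace one-year 'spikes' bracketed by the same class
    (e.g., M, O, M -> M, M, M), which are likely misclassifications."""
    smoothed = list(labels)
    for i in range(1, len(smoothed) - 1):
        if smoothed[i - 1] == smoothed[i + 1] != smoothed[i]:
            smoothed[i] = smoothed[i - 1]
    return smoothed

series = ["M", "M", None, "M", "O", "M", "S", "S"]
print(smooth(gap_fill(series)))  # ['M', 'M', 'M', 'M', 'M', 'M', 'S', 'S']
```

A real implementation would run rules like these over every pixel's 44-year label stack, in the paper's prescribed order, rather than over a single list.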
Figure 4. Different levels of the final 1975, 1995, 2005, and 2013 classifications tested, from 2 classes to 6 classes. A = Alpine, O = Open, S = Stand Initiation, T = Thin, R = Advanced Regeneration, M = Mature. See Supplementary Material: Dictionary S1: Definitions of Forest Structure Classes for detailed definitions of each class. Each subsequent level was created by combining classes from the 6-class prediction. Whereas the initial classifications for every year except 2013 had only 5 classes (Open and Stand Initiation were combined as one class), all of the final yearly predictions were classified into 6 classes. The 5-class through 2-class predictions included different combinations of classes (e.g., Thin combined with the open classes versus Thin combined with the forest classes at the 3-class level) to identify the most accurate groupings.
Figure 5. Median values of forest structural metrics from the USDA Forest Inventory and Analysis program (FIA) across our six time-series-informed structural classes for the year 2013. Whiskers indicate the first and third quartiles (i.e., the interquartile range). Values derived using the Forest Vegetation Simulator (FVS [39]) are indicated. We observed that tree density, tree size, and tree height follow a general pattern of increasing from non-forest to forested classes, i.e., all values for Thin, Advanced Regeneration, and Mature were higher than those for Alpine, Open, and Stand Initiation.
Figure 6. Time series trajectories of six forest structure classes from 1972 to 2015. Percentage of study area of each class for each year is shown on the Y-axis. For instance, Mature has lost area overall, starting with 42% of the study area in 1972 and ending with 35% of the study area in 2015. The transition from MSS to TM data is shown at the 1984 mark.
Figure 7. Sankey diagram of the state transitions among the six forest structure classes from 1972 to 2015. Y-axes show the percent of the study area covered by each class in 1972 and 2015, respectively. For instance, Mature lost area overall, starting with 42% of the study area in 1972 and ending with 35% of the study area in 2015. If we combine the forested classes (Mature, Advanced Regeneration, Thin, and Stand Initiation), we observe a slight increase in forest cover from 1972 to 2015, starting with 67% of the study area and ending with 74% of the study area.
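A Sankey diagram like Figure 7 is driven by a simple tally of class-to-class transitions between the first and last years. The sketch below counts those transitions between two co-registered classification maps, here represented as flat label lists with invented values (not the paper's data).

```python
from collections import Counter

# Count how many pixels moved from each 1972 class to each 2015 class.
# The two label lists stand in for co-registered classified rasters;
# the values are hypothetical.
classes = ["A", "O", "S", "T", "R", "M"]
map_1972 = ["M", "M", "O", "S", "M", "T", "R", "M", "O", "A"]
map_2015 = ["M", "R", "S", "T", "M", "T", "M", "O", "O", "A"]

transitions = Counter(zip(map_1972, map_2015))
total = len(map_1972)
for (src, dst), count in sorted(transitions.items()):
    print(f"{src} -> {dst}: {count / total:.0%} of study area")
```

The resulting (source class, destination class) -> area-fraction mapping is exactly the flow table a Sankey plotting library consumes.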
Figure 8. Changes in forest structure classification from 1972 to 2015 within the southern portion of our study area. Fire scars (red circles in 2007) and clear cuts (blue circle around checkerboard pattern in 2012) are the most obvious visual changes. Insect damage is observed as a gradual decrease of the Mature class over the course of the time series (as seen from 1972 to 2015 in black circles).
Table 1. Overall accuracies and standard errors (SE) of the initial 2013, 2005, 1995, and 1975 classifications. A = Alpine, O = Open, S = Stand Initiation, T = Thin, R = Advanced Regeneration, M = Mature. Only the 2013 classifications separated Open from Stand Initiation. The 5- and 6-class classification accuracies were notably lower than those of the classifications with fewer classes, likely because of the difficulty in distinguishing similar cover types from one another, i.e., Open versus Stand Initiation or Advanced Regeneration versus Mature. Validation points for 2013, 2005, and 1995 were interpreted from Google Earth imagery. Validation points for 1975 were interpreted from digital aerial photographs. Full error matrices can be seen in Supplementary Material, Tables S2–S25.
| Classification | Overall Accuracy 2013 (%) | SE 2013 (%) | Overall Accuracy 2005 (%) | SE 2005 (%) | Overall Accuracy 1995 (%) | SE 1995 (%) | Overall Accuracy 1975 (%) | SE 1975 (%) |
|---|---|---|---|---|---|---|---|---|
| A vs. O vs. S vs. T vs. R vs. M (6-class) | 68 | 3 | – | – | – | – | – | – |
| A vs. OS vs. T vs. R vs. M (5-class) | 73 | 2 | 55 | 3 | 60 | 3 | 58 | 2 |
| A vs. O vs. S vs. T vs. RM (5-class) | 78 | 2 | – | – | – | – | – | – |
| A vs. OS vs. T vs. RM (4-class) | 82 | 2 | 78 | 2 | 75 | 2 | 70 | 2 |
| A vs. OST vs. RM (3-class) | – | – | 90 | 2 | 82 | 2 | 80 | 2 |
| AOST vs. RM (2-class) | – | – | 93 | 2 | 87 | 2 | 83 | 2 |
| A vs. OS vs. TRM (3-class) | 91 | 1 | 84 | 2 | 87 | 2 | 83 | 2 |
| A vs. OSTRM (2-class) | 98 | <1 | 97 | 1 | 97 | <1 | 99 | <1 |
Table 2. Producer’s and user’s accuracies for the initial 6-class 2013 classification and 5-class (Open and Stand Initiation combined as one class) 2013, 2005, 1995, and 1975 classifications. A = Alpine, O = Open, S = Stand Initiation, T = Thin, R = Advanced Regeneration, M = Mature. The Open class had the lowest producer’s accuracy for the 2013 6-class classification, while the Stand Initiation class had the lowest user’s accuracy. When these two classes were combined, their producer’s and user’s accuracies were notably higher. Validation points for 2013, 2005, and 1995 were interpreted from Google Earth imagery. Validation points for 1975 were interpreted from digital aerial photographs. Full error matrices of all the initial classifications can be seen in Supplementary Material, Tables S2–S7.
| Classification | Class Name | Producer’s Accuracy 2013 (%) | User’s Accuracy 2013 (%) | Producer’s Accuracy 2005 (%) | User’s Accuracy 2005 (%) | Producer’s Accuracy 1995 (%) | User’s Accuracy 1995 (%) | Producer’s Accuracy 1975 (%) | User’s Accuracy 1975 (%) |
|---|---|---|---|---|---|---|---|---|---|
| 6-class | A | 98 | 76 | – | – | – | – | – | – |
| | O | 36 | 67 | – | – | – | – | – | – |
| | S | 82 | 50 | – | – | – | – | – | – |
| | T | 49 | 98 | – | – | – | – | – | – |
| | R | 54 | 80 | – | – | – | – | – | – |
| | M | 94 | 63 | – | – | – | – | – | – |
| 5-class | A | 96 | 76 | 96 | 78 | 96 | 63 | 96 | 94 |
| | OS | 93 | 75 | 82 | 67 | 80 | 88 | 75 | 84 |
| | T | 43 | 98 | 32 | 54 | 44 | 53 | 20 | 33 |
| | R | 54 | 80 | 30 | 42 | 28 | 35 | 8 | 12 |
| | M | 94 | 61 | 60 | 45 | 58 | 40 | 74 | 33 |
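The accuracies reported in Tables 1 and 2 follow standard error-matrix arithmetic. The sketch below computes overall accuracy, its standard error (assuming simple random sampling of validation points, which is our assumption for the sketch), and per-class producer’s and user’s accuracies from an invented 5-class error matrix; the counts are illustrative, not the paper’s data (those are in the Supplementary Material).

```python
import math

# Illustrative error (confusion) matrix: rows = mapped (predicted) class,
# columns = reference class. Counts are made up for demonstration.
classes = ["A", "OS", "T", "R", "M"]
matrix = [
    [48, 1, 0, 0, 1],
    [2, 45, 2, 1, 0],
    [0, 3, 22, 4, 1],
    [0, 1, 5, 27, 7],
    [0, 0, 4, 8, 38],
]

n = sum(sum(row) for row in matrix)
correct = sum(matrix[i][i] for i in range(len(classes)))
overall = correct / n
# Standard error of overall accuracy under simple random sampling.
se = math.sqrt(overall * (1 - overall) / n)

for i, c in enumerate(classes):
    col_total = sum(row[i] for row in matrix)  # reference total (omission view)
    row_total = sum(matrix[i])                 # mapped total (commission view)
    producers = matrix[i][i] / col_total
    users = matrix[i][i] / row_total
    print(f"{c}: producer's {producers:.0%}, user's {users:.0%}")

print(f"overall {overall:.0%} (SE {se:.1%})")
```

Producer’s accuracy divides the diagonal count by the reference-column total (errors of omission), while user’s accuracy divides it by the mapped-row total (errors of commission), which is why a class such as Thin can score high on one and low on the other.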
Table 3. Overall accuracies and standard errors for the final classifications of 2013, 2005, 1995, and 1975. A = Alpine, O = Open, S = Stand Initiation, T = Thin, R = Advanced Regeneration, M = Mature. We observed increased overall accuracies and decreased or equivalent standard errors for most of the time-series-informed classifications compared to the initial classifications (Table 1), and those that did not increase in overall accuracy decreased by less than 1%. Validation points for 2013, 2005, and 1995 were interpreted from Google Earth imagery. Validation points for 1975 were interpreted from digital aerial photographs. Full error matrices can be seen in Supplementary Material, Tables S26–S57.
| Classification | Overall Accuracy 2013 (%) | SE 2013 (%) | Overall Accuracy 2005 (%) | SE 2005 (%) | Overall Accuracy 1995 (%) | SE 1995 (%) | Overall Accuracy 1975 (%) | SE 1975 (%) |
|---|---|---|---|---|---|---|---|---|
| A vs. O vs. S vs. T vs. R vs. M (6-class) | 72 | 2 | 62 | 3 | 58 | 3 | 43 | 3 |
| A vs. OS vs. T vs. R vs. M (5-class) | 75 | 2 | 65 | 3 | 61 | 3 | 57 | 3 |
| A vs. O vs. S vs. T vs. RM (5-class) | 82 | 2 | 80 | 2 | 77 | 3 | 63 | 2 |
| A vs. OS vs. T vs. RM (4-class) | 85 | 2 | 83 | 2 | 80 | 2 | 76 | 3 |
| A vs. OST vs. RM (3-class) | 93 | 1 | 91 | 2 | 86 | 2 | 87 | 2 |
| AOST vs. RM (2-class) | 96 | 1 | 94 | 2 | 89 | 2 | 88 | 2 |
| A vs. OS vs. TRM (3-class) | 90 | 1 | 85 | 2 | 88 | 2 | 88 | 2 |
| A vs. OSTRM (2-class) | 97 | <1 | 97 | 1 | 98 | <1 | 100 | <1 |
Table 4. Producer’s and User’s accuracies for the final 6-class and 5-class (Open and Stand Initiation combined as one class) 2013, 2005, 1995, and 1975 classifications.
| Classification | Class Name | Producer’s Accuracy 2013 (%) | User’s Accuracy 2013 (%) | Producer’s Accuracy 2005 (%) | User’s Accuracy 2005 (%) | Producer’s Accuracy 1995 (%) | User’s Accuracy 1995 (%) | Producer’s Accuracy 1975 (%) | User’s Accuracy 1975 (%) |
|---|---|---|---|---|---|---|---|---|---|
| 6-class | A | 98 | 71 | 96 | 79 | 96 | 81 | 96 | 100 |
| | O | 94 | 56 | 82 | 52 | 80 | 61 | 87 | 39 |
| | S | 72 | 82 | 72 | 80 | 68 | 80 | 0 | 0 |
| | T | 43 | 100 | 34 | 66 | 46 | 65 | 20 | 50 |
| | R | 56 | 80 | 28 | 50 | 18 | 50 | 6 | 34 |
| | M | 96 | 66 | 72 | 48 | 82 | 47 | 91 | 40 |
| 5-class | A | 98 | 71 | 96 | 79 | 96 | 82 | 96 | 100 |
| | OS | 94 | 75 | 88 | 69 | 85 | 76 | 84 | 77 |
| | T | 43 | 100 | 34 | 66 | 46 | 65 | 20 | 46 |
| | R | 56 | 81 | 28 | 50 | 18 | 50 | 6 | 35 |
| | M | 96 | 66 | 72 | 48 | 82 | 47 | 91 | 40 |
A = Alpine, O = Open, S = Stand Initiation, T = Thin, R = Advanced Regeneration, M = Mature. Producer’s accuracy for Open and user’s accuracy for Stand Initiation improved in the final 2013 classification compared to the initial classification (Table 2). We observe that Thin and Advanced Regeneration are difficult to distinguish from one another, while the producer’s and user’s accuracies for the Thin, Advanced Regeneration, and Mature classes either remained the same or increased from the initial classifications (Table 2). Validation points for 2013, 2005, and 1995 were interpreted from Google Earth imagery. Validation points for 1975 were interpreted from digital aerial photographs. Full error matrices of all the final classifications can be seen in Supplementary Material, Tables S26–S57.
