3.7. Change Detection

The change detection metrics simplify interpretation and mapping of land cover changes that are indicated by significant differences in land surface reflectance between the same seasons of a given (“current”) year and the preceding year(s). As in Land Cover Classification, we apply a decision tree model that assigns the likelihood (probability) that a pixel represents land cover change, based on statistical decision rules applied in the multispectral/multi-temporal domain. The decision tree model is calibrated using a training population of pixels with assigned change and no-change classes. See Land Cover Classification for more information on decision trees and the active learning model implementation method.

Thematically similar land cover changes may be represented by different trajectories of surface reflectance change. For example, a forest disturbance that removes woody vegetation may be indicated by contrasting changes in shortwave infrared (SWIR) reflectance: SWIR will increase after logging and decrease after forest fire. While the metrics are designed to allow detection of different indications of land cover change, the change detection model requires a set of training data that represents these different spectral responses. To examine land cover dynamics and to create a comprehensive training dataset, the analyst is encouraged to visualize a combination of different metrics.
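The point about contrasting trajectories can be illustrated with a toy sketch (all reflectance values below are hypothetical, not taken from the GLAD metrics): both events are the same thematic class, forest disturbance, yet the SWIR change has opposite signs, so a single-threshold rule on the SWIR difference would miss one of them.

```python
# Toy illustration with hypothetical SWIR reflectance values.
# Logging and fire are both "forest disturbance", but they move
# SWIR reflectance in opposite directions.

swir_intact = 0.12          # hypothetical intact-forest SWIR reflectance
swir_after_logging = 0.28   # exposed soil and debris typically raise SWIR
swir_after_fire = 0.05      # char and ash typically lower SWIR

delta_logging = swir_after_logging - swir_intact   # positive change
delta_fire = swir_after_fire - swir_intact         # negative change

# Training data must cover both trajectories for the model to learn them.
assert delta_logging > 0
assert delta_fire < 0
```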

 

In the following section, we describe the application of the supervised decision tree classification tool, which takes the dependent variable (training data) in the form of vector polygon layers and the independent variables in the form of change detection metrics. The provided tools operate with only two classes: a target class (representing land cover change) and a background class (no change). The output layer shows the likelihood (in percent) that each pixel belongs to the target class.


1. Creating image mosaics

The section Using Image Mosaics provides instructions for using the mosaic_tiles.pl tool to stitch together tiled data and create multi-band image composites. Change detection interpretation may require multiple representations of the metric data using different spectral bands and statistics. Here we provide examples of band combinations used to interpret forest disturbances.

 

a. Current and preceding year composites

The following example parameter files for mosaic_tiles.pl are designed to create two separate composites from the change_B metric dataset: one displaying the average annual reflectance of the current year (year 2) and the other of the preceding year (year 1):

param_mosaic_year1.txt

param_mosaic_year2.txt

The outputs are SWIR-NIR-Red composites for the two years.

Preceding year composite (year 1)

Current year composite (year 2)

b. Inter-annual band difference composite

The following example parameter file may be used to create a SWIR band difference image. It creates an RGB composite that displays the SWIR band from the current year as the red band and the SWIR band from the preceding year as the green and blue bands.

param_mosaic_changeSWIR.txt

Note that not all changes in SWIR spectral reflectance represent land cover change.
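The color logic of this composite can be sketched as follows (a schematic of the general idea, not the mosaic_tiles.pl implementation; pixel values are hypothetical):

```python
# Schematic of the R = current-year SWIR, G = B = preceding-year SWIR composite.
# Where SWIR increased (e.g., logging) the pixel appears reddish;
# where it decreased (e.g., fire) it appears cyan; stable areas are gray.

def change_color(swir_year2, swir_year1):
    """Return an (R, G, B) triplet for one pixel of the difference composite."""
    return (swir_year2, swir_year1, swir_year1)

stable = change_color(0.15, 0.15)  # R == G == B -> gray, no change
logged = change_color(0.30, 0.15)  # R > G == B  -> reddish
burned = change_color(0.05, 0.15)  # R < G == B  -> cyan

assert stable[0] == stable[1] == stable[2]
assert logged[0] > logged[1]
assert burned[0] < burned[1]
```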

 

c. A composite that highlights change between 16-day intervals

One of the most important features of the change detection metrics is the ability to highlight per-16-day composite (seasonal) changes in spectral reflectance. The following example creates a composite that displays the highest seasonal change of the SWIR band as the red band and the average seasonal change of the NIR/SWIR2 band ratio as the green and blue bands.

param_mosaic_changeDIF.txt

Note that not all changes in spectral reflectance represent land cover change.
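The “highest seasonal change” idea can be sketched like this (hypothetical 16-day SWIR series; the actual metric computation is part of the GLAD metrics code):

```python
# Hypothetical 16-day SWIR composites for the same seasonal windows
# in the preceding and current years.
swir_year1 = [0.12, 0.13, 0.12, 0.14, 0.13]
swir_year2 = [0.12, 0.13, 0.27, 0.28, 0.14]   # mid-season disturbance

# Per-interval absolute change, then the highest and average seasonal
# change (the kind of values displayed in the composite bands).
diffs = [abs(a - b) for a, b in zip(swir_year1, swir_year2)]
max_change = max(diffs)
mean_change = sum(diffs) / len(diffs)

assert round(max_change, 2) == 0.15
assert mean_change < max_change
```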


2. Collecting training data

Training data consist of two polygon shapefiles: one with areas marking training-class pixels (“target”) and the other marking other pixels (“background”). Both shapefiles should be in the same coordinate system as the metrics (+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs). The classification tool uses only the object shape data; all attributes are ignored. Correct topology is not required as long as the data can be correctly rasterized, and polygons within a shapefile may overlap. The “target” and “background” polygons may also overlap each other; in case of overlap, the area under the “target” class polygons is erased from the “background” layer.
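The overlap rule (target erases background) can be sketched on rasterized masks (a schematic only; the tool performs the actual rasterization internally):

```python
# After rasterization, each training layer is a boolean mask per pixel.
# Where the two masks overlap, the target class wins: the pixel is
# erased from the background layer.

target     = [True,  True,  False, False]
background = [False, True,  True,  False]

# Erase target pixels from the background mask.
background_final = [b and not t for b, t in zip(background, target)]

assert background_final == [False, False, True, False]
```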

 

The polygon layers may be created in any GIS software; this manual demonstrates shapefile editing in QGIS 2.18. The following checklist summarizes the requirements for training data collection:

  • QGIS 2.18 with OpenLayers, Freehand Editing, Send2GE plugins.
  • Image mosaic (VRT or raster format) of selected metrics used for data visualization.
  • Two empty shapefiles in geographic coordinates (WGS84). Empty shapefiles may be downloaded here: https://glad.umd.edu/gladtools/Examples/target.zip and https://glad.umd.edu/gladtools/Examples/background.zip

To collect training data, follow the routine described below:

  • Create classification workspace. The workspace (folder) should include:
    • list of tiles (single column, tile names only – see example tiles.txt)
    • two shapefiles for training data
    • classification parameter file (see below)
  • Open QGIS (new project) and load required plugins.
  • Add raster layers (mosaics of selected metrics). Optionally: load Bing Maps layer using OpenLayers plugin.
  • Load target.shp and background.shp files. Put the target layer onto the top of the background layer in the Layer Panel.
  • Start editing (Toggle Editing button) for both shapefiles.
  • Use “Add Polygon” or “Freehand Drawing” tools to add training samples. Avoid creating large training polygons. Distribute samples over the entire area of the image.

Sample drawing example:

Image composite

Target training

Background training (overlaid with target training)

 
  • Save layers and the project periodically.

3. Applying classification

 

Before applying classification, check that all required software is installed on your computer.

To apply classification, follow the routine described below:

  • Save all edits and close the QGIS project.
  • Edit the classification parameter file.

mettype=change_B                             Metric type
metrics=D:/Metrics_change_B                  Multi-temporal metrics source folder
dem=D:/DEM                                   Topography metrics source folder
year=2018                                    Year (for multi-temporal metrics)
target_shp=target.shp                        Target class shapefile name
bkgr_shp=background.shp                      Background class shapefile name
tilelist=tiles.txt                           Name of the tile list file
outname=forest_loss_2018                     Output file name (no spaces!)
mask=none                                    Mask file name (none – no mask)
maxtrees=21                                  Number of trees (odd number in the range 1-25)
sampling=10                                  Sampling rate (percent of training data extracted for each tree)
mindev=0.0001                                Tree pruning rule
threads=1                                    Number of parallel processes
treethreads=21                               Number of parallel processes for a tree model
ogr=C:/Program Files/QGIS 2.18/OSGeo4w.bat   Path to the OSGeo4w.bat file (check your local installation)

You may modify the parameter file depending on computer capacity, training size, etc. Specifically:

- Increasing the maxtrees parameter will slow classification but improve model generalization.

- Increasing mindev will reduce tree complexity; decreasing it will increase tree complexity.

- Reduce the sampling parameter if sample areas are too large; increase it if the maxtrees parameter is reduced.

- Reduce the threads and treethreads parameters on a low-capacity computer (minimum value 1).
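The role of maxtrees and the 50% decision threshold can be sketched as a simple bagged-ensemble vote (a schematic of the general approach, not the GLAD implementation):

```python
# Each of the maxtrees trees (an odd number, so ties are impossible)
# votes target (1) or background (0) for a pixel. The output pixel
# value is the percent of trees voting "target"; values >= 50 are
# interpreted as the target (change) class.

def likelihood_percent(votes):
    """Percent of trees assigning the pixel to the target class."""
    return round(100 * sum(votes) / len(votes))

votes = [1, 1, 0, 1, 0, 1, 1]      # 7 trees, 5 vote "target"
value = likelihood_percent(votes)

assert value == 71
assert value >= 50                 # pixel is mapped as the target class
```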

  • Open cmd, navigate to the folder with tile list, and run the program:

> perl C:/GLAD_1.0/classification.pl param_change_B.txt

  • Wait for the process to complete.
  • Open QGIS and load the classification result (TIF file). To visualize the target class, apply transparency to the value interval 0-49. To show only the background class, apply transparency to the interval 50-100.

Example of model output (the output raster transparency is set for 0-49 interval). Blue areas represent pixels with change class likelihood of 50% and above.
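Accordingly, the output raster can be split into the two classes with a simple threshold on the likelihood values (a sketch with hypothetical pixel values):

```python
# Output pixel values are target-class likelihoods in percent (0-100).
# 0-49 -> background (no change), 50-100 -> target (change).

pixels = [3, 48, 50, 71, 100]
classes = ["target" if v >= 50 else "background" for v in pixels]

assert classes == ["background", "background", "target", "target", "target"]
```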


4. Understanding classification outputs

See the Land Cover Classification section for a detailed explanation.


5. Iterating classification

Due to the high complexity of land cover and land cover change, the accuracy of a classification based on a small, subjectively selected training population is usually low. To improve classification accuracy, we implement an active learning method: after obtaining the initial classification output, we evaluate it and add new training sites in areas where commission or omission errors are evident. To perform iterative active learning training, follow this routine:

  • Open the QGIS project and load classification results.
  • Start editing for training shapefiles.
  • Visually check the map (using both target and background class masks) and add training to shapefiles.
  • Save shapefiles and the project and close QGIS.
  • Perform classification. Classification results will be updated.

6. Hierarchical classification: using masks

See the Land Cover Classification section for a detailed explanation. Masking works the same way for change detection. The output of a land cover classification may be used as a mask (e.g., to map forest changes only within the forest class).