Monday, May 7, 2018

Lab 7: Object-based Classification

Introduction
The purpose of this lab is to be introduced to the relatively new object-based classification scheme using the eCognition software. This software allows remotely sensed images to be segmented into spatially and spectrally homogeneous regions (objects), training samples to be collected from these regions to train random forest and support vector machine classifiers, and the classification to be executed using those classifiers.

Methods
The first step in this lab was to become familiar with the eCognition software itself. An Erdas Imagine image was brought into the software and displayed in a 4,3,2 false color band combination to help delineate different land cover types. The next step of the lab was to segment the image. For this lab the multiresolution segmentation algorithm was used, as it helps delineate different objects at different scales. The scale parameter was set to 9, and the shape and compactness weights were set to 0.3 and 0.5 respectively.
The segmentation process was then run, resulting in the following image (fig. 1).
Figure 1. Segmented image using the multiresolution segmentation algorithm with a scale factor of 9.
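Although eCognition's multiresolution segmentation algorithm is proprietary, the general idea can be sketched with an open-source analogue. Below is a minimal sketch using scikit-image's SLIC superpixels; the image array and all parameter values are illustrative assumptions, with n_segments and compactness acting only as loose stand-ins for eCognition's scale and shape/compactness weights.

```python
import numpy as np
from skimage.segmentation import slic

# Hypothetical 4-band image array (rows, cols, bands); in practice the
# Erdas Imagine file would be read with a library such as rasterio.
image = np.random.rand(512, 512, 4)

# SLIC superpixels as a rough open-source stand-in: n_segments plays a
# role loosely comparable to eCognition's scale parameter, compactness
# to its shape/compactness weights.
objects = slic(image, n_segments=2000, compactness=0.5,
               channel_axis=-1, start_label=1)
print("number of objects:", objects.max())
```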
After the image was segmented, training samples were collected from the image objects to train the classifiers that would be applied to the image later. Six land use/land cover (LULC) classes were created for this classification: agriculture, bare soil, forest, green vegetation/shrub, urban, and water. After reviewing the image and consulting Google Earth, different objects in the image were assigned LULC classes to be used as training samples for the classifiers (fig. 2). Each class was given at least 15 training samples distributed throughout the image.
Figure 2. A segment of the image where a portion of the training samples for the LULC classes can be seen.
Once all the training samples were collected, the next step was to create a random forest classifier to classify the image. The parameters for the random forest classification can be seen below (fig. 3). Features included in the classification were the Max Difference between the bands, Gray-Level Co-Occurrence Matrix (GLCM) Dissimilarity, and the GLCM Mean (fig. 4). All of this information was contained within a process tree to automate the classification process (fig. 5).
Figure 3. Parameters for the random forest classification.

Figure 4. Features selected for the random forest classification.
Figure 5. Process Tree used to run the random forest classification.
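The feature extraction and training step can be approximated in code. The sketch below is a hypothetical per-object feature extraction and random forest training using scikit-learn and scikit-image; the band means are an assumption added so each object carries spectral information, and all arrays are random stand-ins for the real segmented image.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from skimage.feature import graycomatrix, graycoprops

def object_features(band_values, gray_patch):
    """Per-object feature vector: band means (an assumption here), the
    Max Difference between bands, GLCM dissimilarity, and GLCM mean.
    band_values: (pixels, bands) floats; gray_patch: uint8 2-D patch."""
    means = band_values.mean(axis=0)
    max_diff = means.max() - means.min()
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    dissim = graycoprops(glcm, "dissimilarity")[0, 0]
    # GLCM mean: gray levels weighted by their co-occurrence probability
    glcm_mean = (glcm[:, :, 0, 0].sum(axis=1) * np.arange(256)).sum()
    return np.concatenate([means, [max_diff, dissim, glcm_mean]])

# Hypothetical training set: 15 sample objects for each of the 6 classes,
# each described by 4 band means plus the 3 difference/texture features
X_train = np.random.rand(90, 7)
y_train = np.repeat(np.arange(6), 15)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
```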


Results
The process tree was then run, creating a random forest LULC classification of the original image (fig. 6).
Figure 6. Random forest classified image for the Eau Claire/Chippewa Falls area.

The next section of the lab was to complete a support vector machine classification. This was done by using the same process tree but changing the classifier type from random forest to support vector machine. Once this was completed, the two classified images were brought into ArcMap so that they could be compared side-by-side (fig. 7).

Figure 7. The random forest and support vector machine classified images.
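Because only the classifier changes, the swap is a one-line difference in code. A minimal sketch, reusing the hypothetical X_train and y_train from the random forest example above:

```python
from sklearn.svm import SVC

# Same hypothetical features/labels as the random forest sketch above;
# only the classifier is swapped, mirroring the single change made in
# the process tree.
svm = SVC(kernel="rbf", C=1.0, gamma="scale")
svm.fit(X_train, y_train)
svm_labels = svm.predict(X_train)
```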
Following the classification of the Eau Claire/Chippewa Falls area, a similar process was used to classify a UAS image collected within the city of Eau Claire using the random forest algorithm (fig. 8).
Figure 8. UAS image classified using the random forest algorithm.

Sources
Landsat images are from the Earth Resources Observation and Science (EROS) Center, United States Geological Survey.
UAS image is from UWEC Geography & Anthropology UAS Center.


Wednesday, May 2, 2018

Lab 10: Radar Remote Sensing

Introduction
The purpose of this lab is to introduce students to radar remote sensing. This lab specifically aims to increase knowledge of noise reduction in radar images, spectral and spatial enhancement, multi-sensor fusion, texture analysis, polarimetric processing, and slant-range conversion.

Methods
The first step of this lab was to reduce speckle in radar images. This was done in Erdas Imagine using the Radar Speckle Suppression tool. First, the coefficient of variation was calculated, which for this lab was 0.274552. This value was then used to run the Radar Speckle Suppression tool. The tool was run in three iterations, with each iteration using a different filter (fig. 1). After the iterations were run, the histograms of each of the outputs were examined (fig. 2).

Figure 1. The different parameters used to run the different iterations of the Radar Speckle Suppression tool.
Figure 2. Each of the histograms from the different iterations. The image on the left is the original image. As more iterations were run, the image became smoother.
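Erdas' speckle suppression filters are not reproduced here, but a minimal Lee-style filter sketch shows the general idea. Treating the noise variance as (cv × local mean)² is an assumption; cv defaults to the 0.274552 value reported for this image.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, cv=0.274552):
    """Minimal Lee-style despeckling sketch. `cv` defaults to the
    coefficient of variation reported for this image; treating the
    noise variance as (cv * local mean)^2 is an assumption."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    var = uniform_filter(img ** 2, size) - mean ** 2
    noise_var = (cv * mean) ** 2
    weight = var / np.maximum(var + noise_var, 1e-12)
    return mean + weight * (img - mean)

radar = np.random.rand(256, 256) * 255        # hypothetical radar band
print("cv =", radar.std() / radar.mean())     # coefficient of variation
despeckled = lee_filter(radar)
```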


After despeckling the image, the next process to run was the edge enhancement tool. This tool helps an analyst better delineate different surface features in an image (fig. 3). The Wallis Adaptive Filter tool was also run as part of the image enhancement.
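As a rough analogue of the edge enhancement step (not the Erdas implementation itself), a Laplacian sharpening sketch:

```python
import numpy as np
from scipy.ndimage import laplace

def edge_enhance(img, weight=1.0):
    """Laplacian sharpening: subtracting a weighted Laplacian from the
    image exaggerates edges, making surface features easier to trace."""
    img = img.astype(float)
    return img - weight * laplace(img)
```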

Following image enhancement, we were asked to perform a sensor merge. A sensor merge combines radar imagery and Landsat imagery to create a single image that has characteristics of both sensors. This was done using an IHS (intensity, hue, saturation) principal component sensor merge, which replaces the intensity values of the Landsat image with the grayscale values from the radar image. This produces a composite image from both sensors (fig. 4).
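A minimal sketch of the intensity-substitution idea, using HSV as a stand-in for the IHS transform (the value channel approximates intensity, and all arrays are hypothetical):

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def ihs_style_merge(landsat_rgb, radar_gray):
    """Intensity-substitution fusion sketch: the value (intensity)
    channel of the Landsat composite is replaced with the radar band,
    then converted back to RGB. Inputs assumed scaled to [0, 1]."""
    hsv = rgb2hsv(landsat_rgb)
    hsv[..., 2] = radar_gray
    return hsv2rgb(hsv)

landsat = np.random.rand(128, 128, 3)   # hypothetical Landsat composite
radar = np.random.rand(128, 128)        # co-registered radar band
fused = ihs_style_merge(landsat, radar)
```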

Following the image merge, we were asked to run a texture analysis on radar imagery from Flevoland, Holland, using the C-band with a 20 meter spatial resolution. The texture analysis created an image that allows areas of similar texture characteristics to be visualized more easily (fig. 5).
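The exact texture measure used by the software is not specified in the lab, but local variance is a common radar texture measure and illustrates the idea:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_texture(img, size=7):
    """Local-variance texture: the per-pixel variance within a moving
    window, so areas of similar roughness get similar values."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    return uniform_filter(img ** 2, size) - mean ** 2
```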

Polarimetric SAR processing was also performed. The image for this section of the lab was taken over the northern section of Death Valley. This was completed using band synthesis with four different polarization combinations. Three different stretch methods were then applied to the imagery: Gaussian, linear, and square root. Of the three, the Gaussian method produced the best results (fig. 6-8).
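Minimal sketches of the three stretches; the Gaussian version here is a simplified clip-and-rescale approximation rather than ENVI's exact histogram transformation:

```python
import numpy as np

def linear_stretch(img):
    """Min-max linear stretch to 0-255."""
    lo, hi = img.min(), img.max()
    return 255 * (img - lo) / max(hi - lo, 1e-12)

def sqrt_stretch(img):
    """Square-root stretch: brightens dark values the most."""
    return 255 * np.sqrt(linear_stretch(img) / 255)

def gaussian_stretch(img, n_std=2.0):
    """Simplified Gaussian stretch: z-score the data, clip at +/- n_std
    standard deviations, and rescale to 0-255."""
    z = np.clip((img - img.mean()) / img.std(), -n_std, n_std)
    return 255 * (z + n_std) / (2 * n_std)
```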

The final section of the lab consisted of a slant-to-ground range transformation. This reduces the geometric distortion of radar images caused by the slanted angle at which the images are captured (fig. 9).
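Under a flat-earth assumption, the conversion is simple right-triangle geometry (a simplification of what the software actually does):

```python
import numpy as np

def slant_to_ground(slant_range, sensor_height):
    """Flat-earth slant-to-ground conversion: the ground range is the
    horizontal leg of the right triangle formed by the platform height
    and the slant range (units must match)."""
    return np.sqrt(slant_range ** 2 - sensor_height ** 2)

# e.g. a 12 km slant range from an 8 km platform altitude
print(slant_to_ground(12_000.0, 8_000.0))   # ~8944 m of ground range
```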

Results



Figure 3. Edge enhanced image (right), allowing ridge features to be more discernible.
Figure 4. The merged image (left) incorporates characteristics from both radar and Landsat imagery.



Figure 5. Texture Analysis output.

Figure 6. Gaussian stretched image and histogram.

Figure 7. Linear stretched image and histogram.

Figure 8. Square root stretched image.
Figure 9. Slant-to-ground range transformation corrected image (right). The corrected image has less geometric distortion than the original image (left).



Lab 9: Hyperspectral Remote Sensing

Introduction
The purpose of this lab is to familiarize ourselves with hyperspectral remote sensing. To do this we used ENVI, an advanced software package for analyzing hyperspectral data. Hyperspectral remote sensing aids in identifying land surface materials more accurately than traditional types of remotely sensed data: by using narrower spectral bands than other methods, analysts can better differentiate between land surface materials. Specifically, this lab is designed to introduce us to spectrometry, hyperspectral images and different spectral processing techniques, the Fast Line-of-sight Atmospheric Analysis of Hypercubes (FLAASH) tool for atmospherically correcting hyperspectral images, and determining the state of different vegetation types.

Methods
To begin the lab, we were asked to extract spectral characteristics from regions of interest (ROIs) in hyperspectral imagery using ENVI. The pre-determined ROIs were brought into the image along with the statistical and spectral plots for each of the ROIs. Each of the ROIs was collected over a region that contained specific minerals. The spectral profiles for these minerals were brought into plots and stacked.
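Extracting a mean ROI spectrum can be sketched in a few lines of NumPy; the cube dimensions and ROI location below are hypothetical:

```python
import numpy as np

def roi_mean_spectrum(cube, mask):
    """Mean spectral profile over a boolean ROI mask, analogous to the
    stacked ROI plots in ENVI. cube: (rows, cols, bands)."""
    return cube[mask].mean(axis=0)

cube = np.random.rand(100, 100, 224)    # hypothetical 224-band cube
mask = np.zeros((100, 100), dtype=bool)
mask[40:50, 40:50] = True               # hypothetical mineral ROI
print(roi_mean_spectrum(cube, mask).shape)   # (224,): one value per band
```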

The next section of this lab involved using FLAASH to atmospherically correct images. The university did not have the proper license to complete the processing, so we were given an image that had previously been corrected with FLAASH. To analyze the correction, the original image and the corrected image were brought into ENVI and compared side-by-side.

The third and final section of this lab was a vegetation analysis of hyperspectral imagery, using images that had been previously corrected with FLAASH. This was done with the Vegetation Index Calculator, which offers 27 different indices. For this lab we used three tools: the Agricultural Stress Tool, the Fire Fuel Tool, and the Forest Health Tool. The Agricultural Stress Tool measures greenness, canopy water content, canopy nitrogen, light use efficiency, and leaf pigments. The Fire Fuel Tool measures greenness, canopy water content, and dry or senescent carbon. Lastly, the Forest Health Tool measures greenness, leaf pigments, canopy water content, and light use efficiency.
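These tools are built from normalized-difference indices. A sketch of two representative ones, NDVI for greenness and an NDWI-style index for canopy water content (not ENVI's exact formulations):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, the standard greenness
    measure underlying tools like these."""
    return (nir - red) / np.maximum(nir + red, 1e-12)

def ndwi(nir, swir):
    """A normalized-difference water index as a stand-in for the canopy
    water content component."""
    return (nir - swir) / np.maximum(nir + swir, 1e-12)

nir, red = np.random.rand(64, 64), np.random.rand(64, 64)
greenness = ndvi(nir, red)
```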

Results


Figure 1. The image on the left is the uncorrected image and the image on the right is the FLAASH-corrected image. The ROIs are of vegetation, and their respective spectral profiles can be seen in the image.
Figure 2. Agricultural Stress output image.
Figure 3. Fire Fuel output image.
Figure 4. Forest Health output image.
Figure 5. Minimum Noise Fraction tool used with the NDVI to reduce noise in the image.
Conclusion
Hyperspectral remote sensing has the ability to produce highly accurate images relative to traditional remote sensing techniques. By using narrower spectral channels, hyperspectral images can better delineate different surface characteristics, including both the chemical and physical properties of image objects. For this lab we were able to process and analyze images of vegetation and minerals.

Lab 8: Advanced Classifiers 2

Introduction
The purpose of this lab is to be introduced to advanced classification algorithms. Advanced classifiers offer increased classification accuracy for remotely sensed images over traditional unsupervised and supervised classifiers. For this lab, an expert system/decision tree classification using ancillary data and an artificial neural network were both developed to increase the accuracy of image classification.


Part 1: Expert System Classification 
The first section of this lab was to take an already classified image and improve upon its classification accuracy (fig. 1). The first step of this process is to build knowledge to train the classifier. The knowledge used to train the classifier can include data that is not contained within the image itself, such as parcel data, census block data, and soil type. The previously classified image we were given had multiple errors: areas within the city, such as parks and cemeteries, were misclassified as agricultural land (fig. 2).
Figure 1. Original Classified image for Eau Claire and Chippewa Falls
Figure 2. An area within the city containing misclassified pixels. The pink area in the northeast section of the image is a graveyard that was misclassified as agriculture.

The accuracy of the image was improved by reclassifying the misclassified pixels. This was done with the Knowledge Engineer tool in Erdas Imagine. The process can be broken down into three main components: the hypothesis, the rules, and the variables. The hypothesis represents an output class being tested, the rules communicate the relationships between the image classes and the other data, and the variables are where the final outputs are stored.

Figure 3. Knowledge Engineer used to correct the original image.
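The hypothesis/rule/variable structure boils down to conditional reclassification. A minimal sketch of one such rule, with hypothetical class and zoning codes, mirroring the graveyard fix described above:

```python
import numpy as np

# Hypothetical class and ancillary codes (assumed for illustration)
AG, VEG = 3, 2                  # agriculture and vegetation class codes
PARK_OR_CEMETERY = 7            # ancillary parcel/zoning code

classified = np.random.randint(1, 6, (200, 200))   # stand-in rasters
zoning = np.random.randint(1, 9, (200, 200))

# One rule: agriculture pixels inside a park/cemetery parcel are
# reassigned to vegetation, mirroring the graveyard fix described above
corrected = np.where((classified == AG) & (zoning == PARK_OR_CEMETERY),
                     VEG, classified)
```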
In the next section of the lab, we incorporated ancillary data into the model to increase the accuracy of the original classification. This was again done with the Knowledge Engineer tool in Erdas Imagine. For this section, more classes were created to better separate the different LULC classes throughout the Eau Claire/Chippewa Falls area. These extra classes included Vegetation 2, Agriculture 2, Residential, and Other Urban.

Figure 4. Knowledge Engineer tool used to build the new classification.
Once the model was built (fig. 4), ancillary data was added as a knowledge base file. After the data was incorporated, the model was run and a corrected image was created (fig. 5). The agriculture and vegetation classes were each combined into a single class to reduce complexity in the final image (fig. 5).
Figure 5. Corrected image using expert system classification.
Part 2: Artificial Neural Network Classification
The next section of the lab involved the use of an Artificial Neural Network (ANN) to classify an image in ENVI. An ANN works similarly to the human brain, in that the computer learns from the inputs it is given. This is done through the use of input, hidden, and output layers; hidden layers convert the input layers into products that the output layers can use. We were given an image that covers the UNI campus. In order for the ANN to work, training samples needed to be collected. For the purposes of this lab, we were given multiple regions of interest (ROIs) to be used to train the ANN (fig. 6). Once the ROIs were brought into the image, the Neural Network tool was used to run the ANN and create a classified image (fig. 7).

Figure 6. False color image of the UNI campus with the three different ROIs used to train the ANN.
Figure 7. Classified image of the UNI campus created from the ANN classification.
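A minimal sketch of the same input, hidden, output layer idea using scikit-learn's MLPClassifier; the training arrays are random stand-ins for pixels sampled from the three ROIs:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training pixels from the ROIs: each row is one pixel's
# band values, each label one of three ROI classes
X_train = np.random.rand(300, 4)
y_train = np.repeat(np.arange(3), 100)

# One hidden layer maps input band values to class scores, the same
# input -> hidden -> output structure described above
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
ann.fit(X_train, y_train)
labels = ann.predict(X_train)
```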
Conclusion
Advanced classifiers help image analysts enhance the classification accuracy of both supervised and unsupervised classifications. Advanced classifiers such as expert systems and neural networks help delineate classes more accurately through the use of machine learning and ancillary data.


Friday, March 30, 2018

Lab 6: Digital Change Detection

Introduction
The objective of this lab is to use two different forms of land use/land cover (LULC) change detection. The first form used was a qualitative visual assessment; the second was a quantifiable post-classification change detection.

Methods
Part 1: Change detection using Write Function Memory Insertion
The first change detection method used in this lab was write function memory insertion. This method places near-infrared bands from different dates into different color guns (fig. 1). For this lab, two images covering western Wisconsin were used: one from 1991 and the other from 2011. When the bands were stacked in the combination shown below, areas that had seen change had a pink tint (fig. 2). This method allows an analyst to quickly inspect an area visually and see areas of change in a qualitative sense. While it does show LULC change, it fails to tell the user which specific LULC classes have changed or the amount of change that has occurred.
Figure 1. Color gun combination used to complete the write function memory insertion change detection.
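The band stacking itself is simple. A sketch with hypothetical NIR arrays, placing the newer date in the red gun so change shows as a red/pink tint:

```python
import numpy as np

def write_function_memory(nir_new, nir_old):
    """Place the newer NIR band in the red gun and the older NIR band in
    green and blue, so change shows as a red/pink tint and unchanged
    areas render gray."""
    return np.dstack([nir_new, nir_old, nir_old])

nir_1991 = np.random.rand(256, 256)   # hypothetical 1991 NIR band
nir_2011 = np.random.rand(256, 256)   # hypothetical 2011 NIR band
composite = write_function_memory(nir_2011, nir_1991)
```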
Part 2: Post-Classification Comparison Change Detection
For this section of the lab, we were given two classified LULC images of the Milwaukee metropolitan area, one for 2001 and one for 2011. The area of each LULC class was recorded in square meters and converted to hectares in Excel (fig. 3). The percent change was then calculated for each of the LULC classes.
Figure 3. Excel sheet displaying the area for each of the LULC classes.
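The area bookkeeping can be sketched directly; the 30 m Landsat pixel size and class codes are assumptions:

```python
import numpy as np

def class_areas_ha(classified, pixel_size_m=30.0):
    """Per-class area in hectares (1 ha = 10,000 m^2); 30 m Landsat
    pixels are assumed."""
    codes, counts = np.unique(classified, return_counts=True)
    return dict(zip(codes, counts * pixel_size_m ** 2 / 10_000))

lulc_2001 = np.random.randint(1, 6, (500, 500))   # hypothetical rasters
lulc_2011 = np.random.randint(1, 6, (500, 500))
a01, a11 = class_areas_ha(lulc_2001), class_areas_ha(lulc_2011)
pct_change = {c: 100 * (a11[c] - a01[c]) / a01[c] for c in a01}
```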
Once this was done, a new image was created that showed how the LULC classes had changed over the ten-year period using the Wilson-Lula algorithm (fig. 4).
Figure 4. The Wilson-Lula algorithm used to determine how the LULC had changed between 2001 and 2011.
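The details of the Wilson-Lula algorithm are not given in the lab, so the sketch below shows only a generic post-classification from-to comparison, not the algorithm itself; all arrays and class codes are hypothetical.

```python
import numpy as np

def from_to_map(old, new, from_class, to_class):
    """Flag pixels that moved from one specific class to another between
    the two dates (a generic from-to comparison, not Wilson-Lula)."""
    return (old == from_class) & (new == to_class)

lulc_2001 = np.random.randint(1, 6, (500, 500))   # hypothetical rasters
lulc_2011 = np.random.randint(1, 6, (500, 500))
changed = from_to_map(lulc_2001, lulc_2011, 2, 4)  # class codes assumed
```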
Results
Part 1: Change detection using Write Function Memory Insertion
Figure 2. The image on the left was taken in 1991 and the image on the right in 2011. Areas of change can be seen in pink. For example, areas east of the city of Eau Claire saw urban development, and a new highway was built.
Part 2: Post-Classification Comparison Change Detection
Figure 5. Map created in ArcMap showing how the LULC of the Milwaukee metropolitan area changed between 2001 and 2011.
Sources
The images for this lab were provided by Dr. Cyril Wilson.

Saturday, March 24, 2018

Lab 5: Classification Accuracy Assessment

Introduction
In the previous two labs, supervised and unsupervised classification images were created. The objective of this lab was to complete an accuracy assessment of those previously created images using ground reference testing samples.

Methods
To begin the lab, ground reference testing samples needed to be generated from a reference image. This was done in Erdas Imagine by having two viewers open, one with the classified image and the other with the reference image. Then, using the Accuracy Assessment tool, 125 random points (reference points) were distributed throughout the reference image using a stratified random distribution (fig. 1).
Figure 1. Random points generated within the reference image for the unsupervised classified image.
Once this was done, each of the points was assigned its land cover value (fig. 2). This was done for all 125 points for both classified images (fig. 3).
Figure 2. Classification values used to assign the land cover values for each of the random points.
Figure 3. A portion of the list of assigned values for the random points for the unsupervised classified image.
Once the values were assigned for all 125 points for both images (250 total), an accuracy assessment was then generated for each of the two images (fig. 4).

Figure 4. Accuracy assessment created for one of the images. 

For each of the accuracy assessments, an error matrix table was then created. This table displays the accuracy for each of the different classes (fig. 5).
Figure 5a. Error matrix table created for the unsupervised classification image displaying both the user and producer accuracy for each land cover type.
Figure 5b. Error Matrix Table created for the supervised classified image. 
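The error matrix and the accuracies derived from it can be sketched with scikit-learn; the reference and classified labels below are random stand-ins for the 125 test points:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Random stand-ins for the reference and classified labels at the
# 125 test points (five classes, coded 0-4)
reference = np.random.randint(0, 5, 125)
predicted = np.random.randint(0, 5, 125)

cm = confusion_matrix(reference, predicted)   # rows: reference classes
overall = np.trace(cm) / cm.sum()             # overall accuracy
producers = np.diag(cm) / cm.sum(axis=1)      # producer's accuracy
users = np.diag(cm) / cm.sum(axis=0)          # user's accuracy
print(f"overall accuracy: {overall:.1%}")
```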
Results/Discussion
The overall accuracy for the supervised classification was 64% and the unsupervised classification was 72.8%. A potential reason for the unsupervised classification producing better results than the supervised classification may be that the training samples generated to create the supervised image did not accurately account for the variability among the spectral signatures for each class. This is especially true for the urban and bare soil classes. Both classification methods were accurate for the water and forest classes. This is because these areas have a relatively low amount of variability in their spectral characteristics. Both classifications had poor accuracy for the bare soil and urban classes. This is because these areas have similar spectral characteristics and the urban class has a very wide variety of spectral signatures within the class.

Sources
Images for this lab were provided by Dr. Cyril Wilson

Friday, March 9, 2018

Lab 4: Pixel-Based Supervised Classification

Introduction
The objective of this lab was to expand upon the classification of biophysical and sociocultural information from remotely sensed images through pixel-based supervised classification. Building on the previous lab, this lab specifically focuses on selecting training samples for a supervised classifier, evaluating the quality of the training signatures collected, and producing meaningful land use/land cover classes through supervised classification.

Methods
To begin the lab, we were asked to collect training samples for a supervised classification using a Landsat 7 (ETM+) image covering both Eau Claire and Chippewa counties. To do this, we collected samples of five different land uses: water, forest, agriculture, urban, and bare soil. Google Earth was used as a reference to help determine land cover types. Each of the different land uses had a minimum number of samples required (figure 1).
Water was the first land cover type for which signatures were collected. Signatures were taken from lakes, ponds, and rivers to account for the varying spectral characteristics that different water bodies produce. Once the spectral signatures were collected, they were displayed in a chart to make sure that they followed the expected profile of water. In the case of this image, there were abnormally high values in band 1, which is most likely the result of atmospheric scattering.
Example of a sample being taken on the Chippewa River
List of samples taken for the water class
Spectral profile for the water samples. 
This procedure was then repeated for the forest, agricultural, urban and bare soil classes.

Spectral Profile for the Forest class
Spectral Profile for Agriculture class
Spectral Profile for the Urban class
Spectral Profile for the Bare Soil class
All of the spectral profiles were placed into a single plot. A separability analysis using the Evaluate Separability function was used to see which four bands had the greatest separability among the different classes. The results showed that the greatest separability was in bands 1, 2, 4, and 6. The average separability score was 1976, which is quite good. All of the training sample spectral signatures were then combined into their respective classes.
Spectral profiles of all of the classes displayed in a single plot.
A separability analysis displaying the four bands that showed the greatest amount of separability (1, 2, 4, 6), with a score of 1978.
All individual spectral signatures combined into their respective classes and plotted together. 
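Erdas reports separability as transformed divergence on a 0-2000 scale, which is why an average near 2000 is considered good. A sketch of the standard transformed divergence formula; the signature means and covariances below are illustrative:

```python
import numpy as np

def transformed_divergence(m1, c1, m2, c2):
    """Transformed divergence between two class signatures (mean vector
    and covariance matrix each), scaled 0-2000 as in Erdas Imagine."""
    c1i, c2i = np.linalg.inv(c1), np.linalg.inv(c2)
    dm = (m1 - m2).reshape(-1, 1)
    d = (0.5 * np.trace((c1 - c2) @ (c2i - c1i))
         + 0.5 * np.trace((c1i + c2i) @ dm @ dm.T))
    return 2000 * (1 - np.exp(-d / 8))

# Illustrative two-band signatures: identical covariances, shifted means
m1, m2 = np.array([50.0, 80.0]), np.array([60.0, 90.0])
c1 = c2 = np.eye(2) * 25.0
print(round(transformed_divergence(m1, c1, m2, c2)))  # ~1264
```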
Once the signatures were combined, the signature table was saved and used to complete the pixel-based supervised classification. The new classified image was then imported into ArcMap, where a final map was created.
Map created in ArcMap displaying the newly created supervised classification image.
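The lab does not name the decision rule used for the supervised classification, so the sketch below uses a simple minimum-distance-to-means rule purely as an illustration; all arrays are hypothetical.

```python
import numpy as np

def min_distance_classify(pixels, class_means):
    """Assign each pixel to the spectrally nearest class mean.
    pixels: (n, bands); class_means: (classes, bands)."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

pixels = np.random.rand(1000, 6)        # hypothetical pixel spectra
class_means = np.random.rand(5, 6)      # one mean signature per class
labels = min_distance_classify(pixels, class_means)
```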
Sources
The images for this lab were provided by Dr. Cyril Wilson
