Proceedings, XXI International Congress for Photogrammetry and Remote Sensing (Part B7-1)

AN EXPLORATORY ANALYSIS OF IN SITU HYPERSPECTRAL DATA 
FOR BROADLEAF SPECIES RECOGNITION 
R. Pu 
Department of Geography, University of South Florida, 4202 E. Fowler Ave., NES 107, Tampa, FL 33620 USA - 
rpu@cas.usf.edu 
Commission VII, WG VII/3 
KEY WORDS: Species Recognition, Hyperspectral Data, Broadleaf Species, Artificial Neural Network, Linear Discriminant Analysis, Spectral Features.
ABSTRACT: 
Timely and accurate identification of tree species by spectral methods is crucial for forest and urban ecological management, and traditional methods and data have proven unable to meet this requirement. In this study, a total of 394 reflectance spectra (between 350 and 2500 nm) from foliage branches or canopies of 11 important urban forest broadleaf species were measured with a spectrometer in the City of Tampa, Florida, U.S. The 11 species are American Elm (Ulmus americana), Bluejack Oak (Quercus incana), Crape Myrtle (Lagerstroemia indica), Laurel Oak (Q. laurifolia), Live Oak (Q. virginiana), Southern Magnolia (Magnolia grandiflora), Persimmon (Diospyros virginiana), Red Maple (Acer rubrum), Sand Live Oak (Q. geminata), American Sycamore (Platanus occidentalis), and Turkey Oak (Q. laevis). A total of 46 spectral variables, including normalized spectra, derivative spectra, spectral vegetation indices, spectral position variables, and spectral absorption features, were extracted from the in situ hyperspectral measurements and analyzed. Two classification algorithms were used to identify the 11 broadleaf species: a non-linear artificial neural network (ANN) and a linear discriminant analysis (LDA). An ANOVA indicates that the 30 selected spectral variables are effective in differentiating the 11 species. These 30 variables account for water absorption features at 970, 1200, and 1750 nm and reflect characteristics of pigments in tree leaves, especially the variability of leaf chlorophyll content. The experimental results indicate that both classification algorithms (ANN and LDA) produced acceptable accuracies (OAA from 86.3% to 87.8%, Kappa from 0.83 to 0.87) and performed similarly in classifying the 11 broadleaf species with the 30 selected spectral variables as input. These preliminary results imply that identifying species similar to these 11 broadleaf species with current remote-sensing techniques remains difficult but is possible with acceptable accuracy.
1. INTRODUCTION 
The need for detailed forest parameters (species, size, and 
number of trees), biophysical properties (canopy density and 
leaf area index (LAI)), and canopy chemical composition over 
large land holdings in the U.S., has increased markedly in the 
last decade (Gong et al., 1999). Mapping of forest area and identification of tree species are usually based on aerial photo interpretation and moderate-resolution satellite image classification. Aerial photo interpretation depends on the experience of the photo interpreters, and some experiments indicate large discrepancies among interpretations produced by different interpreters (Biging et al., 1991; Gong and Chen, 1992). It is also difficult to develop detailed, accurate maps of individual tree crowns and tree canopy because of the limited spatial resolution of existing satellite imagery such as SPOT HRV and Landsat TM/ETM+ data (Congalton et al., 1991; Brockhaus and Khorram, 1992; Franklin, 1994; Carreiras et al., 2006).
During the last two decades, researchers have used high spatial 
resolution satellite sensors (< 5 m resolution, such as IKONOS 
and QuickBird) and hyperspectral data [such as Airborne 
Visible Infrared Imaging Spectrometer (AVIRIS)] to extract 
detailed forest parameters such as tree species and to map forest canopy (e.g., Wang et al., 2004; Xiao et al., 2004; Buddenbaum et al., 2005; Johansen and Phinn, 2006). The preliminary results of evaluating the capabilities of these high-resolution data for identifying tree species and mapping tree canopy indicate that the accuracy is not yet satisfactory (Asner et al., 2002; Carleer and Wolff, 2004; Johansen and Phinn, 2006). In mapping urban forest species with AVIRIS hyperspectral image data, Xiao et al. (2004) reported a relatively low overall accuracy (OAA = 70%) for identifying 16 tree species, although they successfully discriminated three forest types with OAA = 94%. When coniferous tree species were classified with HyMap data using geostatistical methods, the classification accuracy (Kappa) was only 0.74, a result comparable to that obtained with stem density information derived from high spatial resolution imagery (Buddenbaum et al., 2005). Therefore, further research is still needed on recognizing tree species and mapping tree canopy using either high spatial or high spectral resolution remote-sensing data, including in situ hyperspectral measurements (e.g., Gong et al., 1997; Cochrane, 2000).
In this study, the capabilities of in situ hyperspectral data for recognizing 11 broadleaf species in an urban environment were further evaluated, using measurements collected with an ASD spectrometer (FieldSpec®3, Analytical Spectral Devices, Inc., U.S.). The objectives of this analysis are (1) to examine the capability of hyperspectral data for identifying major broadleaf tree species in the City of Tampa, Florida, (2) to evaluate the effectiveness of spectral features extracted from the in situ hyperspectral data, and (3) to compare the performance of the artificial neural network (ANN) and linear discriminant analysis (LDA) techniques in identifying broadleaf species.
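A minimal sketch of the ANN-versus-LDA comparison outlined in objective (3) is shown below, using scikit-learn classifiers as stand-ins and random placeholder data; the actual network architecture, training scheme, feature selection, and sample split used in this study are not reproduced here and are assumptions for illustration only.

```python
# Sketch: compare an LDA classifier and a small neural network on a matrix of
# spectral variables, reporting overall accuracy (OAA) and Cohen's kappa.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# X: (n_spectra, 30) matrix of selected spectral variables; y: species labels.
X = np.random.rand(394, 30)          # placeholder for the 394 measured spectra
y = np.random.randint(0, 11, 394)    # placeholder labels for the 11 species

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "ANN": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    # OAA and Kappa are the two accuracy measures reported in this study.
    print(name, "OAA =", accuracy_score(y_test, pred),
          "Kappa =", cohen_kappa_score(y_test, pred))
```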