4.5 Accuracy Assessment of the Classification Result
The class mapping accuracy of NLP is 70.69%, which is higher
than that produced by the ML classifier. Moreover, the kappa is
0.75, which can be considered acceptable.
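As background, the kappa coefficient reported above is derived from
the error matrix as the observed agreement corrected for chance
agreement. A minimal sketch of that calculation is given below in
Python; the confusion matrix shown is a hypothetical example, not the
error matrix of this study.

import numpy as np

def kappa_from_confusion(cm):
    # Cohen's kappa from a square error matrix
    # (rows: reference data, columns: classified data).
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                 # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical 2x2 error matrix (NLP vs. other), for illustration only.
print(round(kappa_from_confusion([[41, 17], [12, 80]]), 2))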
4.6 Comparison of Classification Methods
4.6.1 Classification Accuracies
Figure 12 shows a comparison of accuracies between the ML
and the SP classified images. The SP classification scored
higher on all three accuracy measures than the ML
classification. The significance of this difference was tested
using a Z-test, which showed that the kappa of the ML map
(0.57) is significantly lower than the kappa of the SP map
(0.75) (Z-test for kappa, Z = 2.04, P = 0.042). Therefore, it
can be concluded that the SP method performed better in
detecting single tree felling.
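The Z statistic for comparing two independent kappa coefficients is
the difference between the kappas divided by the square root of the
sum of their variances. The sketch below reproduces the reported
values; the kappa variances are assumed placeholders chosen for
illustration, since the paper does not list them.

import numpy as np
from scipy.stats import norm

def kappa_z_test(k1, var1, k2, var2):
    # Two-sided Z-test for the difference between two independent
    # kappa coefficients.
    z = (k1 - k2) / np.sqrt(var1 + var2)
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return z, p

# Kappas from the paper; the variances (0.0040, 0.0038) are assumed
# placeholders, not values reported in the study.
z, p = kappa_z_test(0.75, 0.0040, 0.57, 0.0038)
print(f"Z = {z:.2f}, P = {p:.3f}")   # Z = 2.04, P = 0.042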
Figure 12. Comparison of accuracies (in percent) between ML and SP
classified maps. Note: OA: Overall Accuracy, KA: Kappa, CA
NLP: Class Accuracy of NLP.
4.7 Single Tree Detection
This section compares the detection of NLP by the ML
classifier with that by the SP classifier, since the latter was
shown to perform better (see previous section). NLP detections
by both classifiers are shown in Figure 14, which is a subset of
the map shown in Figure 13. NLP detected only by the SP classifier is
depicted in red, while ML-only detections are shown in yellow.
Common NLP detections are depicted in blue. The difference in
detection between the classifiers was quantified as follows.
Approximately 14% of the NLP detected by the SP classifier
was misclassified as other by the ML classifier. Moreover, the
ML classifier classified as NLP 28% of the pixels that the SP
classifier had labelled as other. This illustrates the
difference in detection between the ML classifier and the SP
classifier.
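The percentages above follow from a per-pixel cross-tabulation of the
two classified maps. A minimal sketch of such a comparison is shown
below, assuming both maps are available as arrays with NLP coded as 1
and other as 0; the array names and toy values are hypothetical.

import numpy as np

def detection_overlap(sp_map, ml_map, target=1):
    # Cross-tabulate two classified rasters for one target class and
    # report each category as a percentage of all target detections.
    sp = np.asarray(sp_map) == target
    ml = np.asarray(ml_map) == target
    both    = np.logical_and(sp, ml).sum()    # detected by both (blue)
    sp_only = np.logical_and(sp, ~ml).sum()   # missed by ML (red)
    ml_only = np.logical_and(~sp, ml).sum()   # ML-only detections (yellow)
    total = both + sp_only + ml_only
    return {name: 100.0 * count / total
            for name, count in [("both", both), ("sp_only", sp_only),
                                ("ml_only", ml_only)]}

# Toy 2x3 maps for illustration; real inputs would be the classified
# Landsat-7 ETM+ rasters.
sp_map = np.array([[1, 1, 0], [0, 1, 0]])
ml_map = np.array([[1, 0, 0], [1, 1, 0]])
print(detection_overlap(sp_map, ml_map))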
The map in Figure 14 gives an indication of where the ML classifier
misclassified pixels as NLP and where it missed pixels containing NLP. The
red colour depicts SP detections of NLP that were missed by
the ML classifier, about 14% of the total number of
pixels detected. Most of these pixels are found in the lower left
part of the image. The pixels that were misclassified as NLP by
the ML classifier are coloured yellow. Most of these pixels are also located
in the lower left part of the image, but many are also found
along the main road. The pixels that were correctly classified by the
ML classifier are shown in blue. These are concentrated along
the road.
The accuracy of the Maximum Likelihood classification of the
30 m resolution image was found to be lower than that of the IMAGINE
Subpixel classifier. This finding is in agreement with
Bhandari (2003), who found similar results when
detecting selective logging in the Labanan concession using the
IMAGINE Subpixel classifier.
The difference was found to be statistically significant, which means
that the IMAGINE Subpixel classifier is a better method than
Maximum Likelihood for detecting single tree felling in
tropical forest using Landsat-7 ETM+ imagery. Furthermore,
the class mapping accuracy of single tree felling by the
Maximum Likelihood classifier was also lower than that of the
IMAGINE Subpixel classifier (61% versus 71%). The second
additional data set used in the Maximum Likelihood
classification improved the class mapping accuracy of single
tree felling by 2% compared to the 30 m resolution image,
but due to time limitations this was not studied in more depth.
Comparison of both classified maps revealed that 31% of the
NLP was detected by both classifiers. The ML
classifier detected 28% more NLP than the Subpixel classifier,
but missed 14% of the NLP that the SP classifier detected; the
28% detected by the ML classifier was classified as
"Other" by the SP classifier. The superior performance of the SP
classifier can be explained by the different signature derivation
processes of the two classification techniques. The Maximum Likelihood
classifier develops signatures by combining the spectra of the training
set pixels, which includes the contributions of all materials in
the training set. In contrast, the signature developed by the
IMAGINE Subpixel classifier is the extracted component of the
pixel spectra that is most common to the training set. Having
derived the signature, the Maximum Likelihood classifier
identifies pixels in the scene that have the same spectral
properties as the signature. The IMAGINE Subpixel classifier,
however, estimates and removes the subpixel background and
compares the residual spectrum with the signature.
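To make the contrast concrete, the sketch below caricatures the two
signature styles: a Gaussian signature built from all training spectra
(the Maximum Likelihood approach) versus matching a background-removed
residual spectrum against a material signature (the subpixel idea).
This is a conceptual illustration only, not the proprietary ERDAS
algorithm; the function names, the assumed linear mixing model, and
the cosine-similarity measure are all assumptions.

import numpy as np

def ml_signature(training_pixels):
    # ML signature: mean vector and covariance of ALL training spectra,
    # so every material present in the training set contributes.
    return training_pixels.mean(axis=0), np.cov(training_pixels, rowvar=False)

def ml_log_likelihood(pixel, mu, cov):
    # Gaussian log-likelihood (up to a constant) used by the ML rule:
    # a pixel is assigned to the class that maximises this value.
    d = pixel - mu
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)

def subpixel_residual_match(pixel, background, signature, fraction):
    # Subpixel idea under a linear mixing assumption: remove the estimated
    # background contribution, then compare the residual spectrum with the
    # material signature (here via cosine similarity).
    residual = (pixel - (1.0 - fraction) * background) / fraction
    return (residual @ signature /
            (np.linalg.norm(residual) * np.linalg.norm(signature)))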
The IMAGINE Subpixel classifier also addresses the spectral
distortion caused by atmospheric and sun angle effects within an image.
For this reason, the signature developed for new logged
points (i.e. single tree felling) in this research can be applied to
other Landsat-7 ETM+ images captured at different times and
covering other parts of the concession. However, the discrimination of
single tree felling from other materials with similar reflectance
should be carried out using GIS and additional data such as
logging maps and land use maps.
Furthermore, the Maximum Likelihood classifier has been used
for many years and is supported by many GIS and RS software
packages such as ILWIS and ERDAS, and it is straightforward
to implement. The IMAGINE Subpixel classifier, on the
other hand, is a relatively new product that is only available with
ERDAS. It is one of the few RS image processing packages that
deals with mixed pixels. The IMAGINE Subpixel classifier is
less straightforward to use. A user with prior experience in
traditional supervised multi-spectral classifiers can get
acquainted with the software in less than a day by running the
tutorial. However, the specific signature derivation and image
classification technique is more complex and will thus take
more time to learn. But given the superior results of the
IMAGINE Subpixel classifier compared to the Maximum
Likelihood classifier, it is worth investing in the purchase and
use of the software.