Figure 3. Landsat 5 TM image (false colour) of the study area, acquired on 10/09/2010.
3. METHODOLOGY
3.1 Data processing
Both SAR data (ALOS/PALSAR and ENVISAT/ASAR) and Landsat 5 TM images were registered to the same map coordinate system (UTM projection, WGS84 datum) and resampled to a 15 m pixel size. The Enhanced Lee filter with a 5×5 window was applied to suppress speckle noise in the SAR images. SAR backscatter values were converted to decibels (dB) using the following equation:
dB = 10 × log₁₀(DN²)    (1)

where DN is the magnitude (digital number) value of the SAR image.
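As an illustration only, a minimal NumPy sketch of the conversion in Eq. (1); the function name and the masking of zero-valued pixels are added assumptions, not part of the original processing chain.

```python
import numpy as np

def to_decibel(dn):
    """Convert SAR magnitude (digital number) values to decibels, Eq. (1)."""
    dn = np.asarray(dn, dtype=np.float64)
    # Mask zero pixels so the logarithm stays defined.
    dn = np.where(dn > 0, dn, np.nan)
    return 10.0 * np.log10(dn ** 2)

# Example on a small patch of magnitude values
patch = np.array([[120.0, 250.0], [80.0, 0.0]])
print(to_decibel(patch))
```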
Textural data, which provide information on the spatial patterns and variation of surface features, play an important role in image classification (Sheoran et al. 2009). In this work, first-order and second-order (grey-level co-occurrence matrix, GLCM) texture measures were extracted for classification. The first principal component (PC1) images computed from each of the multi-date ALOS/PALSAR, ENVISAT/ASAR and Landsat 5 TM datasets were used to generate multi-scale textural information. Finally, three first-order texture measures (Mean, Variance and Data Range) and four GLCM texture measures (Variance, Homogeneity, Entropy and Correlation), each computed with four window sizes (5×5, 9×9, 13×13 and 17×17), were selected.
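The paper does not state which software produced the texture images, so the following scikit-image/SciPy sketch is only an illustration of how the listed measures could be derived from a PC1 band for one window size; the 32-level quantisation, single-pixel offset and averaging over four angles are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter
from skimage.feature import graycomatrix, graycoprops

def first_order_textures(pc1, win=5):
    """First-order texture images (Mean, Variance, Data Range) from a PC1 band."""
    pc1 = np.asarray(pc1, dtype=np.float64)
    mean = uniform_filter(pc1, size=win)
    var = np.clip(uniform_filter(pc1 ** 2, size=win) - mean ** 2, 0, None)
    drange = maximum_filter(pc1, size=win) - minimum_filter(pc1, size=win)
    return mean, var, drange

def glcm_textures(window, levels=32):
    """GLCM measures (Variance, Homogeneity, Entropy, Correlation) for one window."""
    # Quantise the window to 'levels' grey levels (assumption, not from the paper).
    q = np.floor((window - window.min()) / (np.ptp(window) + 1e-9) * (levels - 1))
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    p = glcm.mean(axis=(2, 3))              # average the normalised GLCM over angles
    i = np.arange(levels)
    p_i = p.sum(axis=1)                     # marginal grey-level probabilities
    mu = (i * p_i).sum()
    variance = (((i - mu) ** 2) * p_i).sum()
    homogeneity = graycoprops(glcm, 'homogeneity').mean()
    correlation = graycoprops(glcm, 'correlation').mean()
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
    return variance, homogeneity, entropy, correlation

# Example with a placeholder PC1 band
rng = np.random.default_rng(0)
pc1 = rng.normal(size=(64, 64))
mean_img, var_img, range_img = first_order_textures(pc1, win=5)
print(glcm_textures(pc1[:17, :17]))         # one 17x17 window
```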
Normalized Difference Vegetation Index (NDVI) images were computed from the Red and Near Infrared bands of the Landsat 5 TM images. Four different combined datasets were generated and used in the classification process (Table 2).
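A minimal sketch of the NDVI computation, assuming the standard Landsat 5 TM band layout (band 3 = Red, band 4 = Near Infrared); the small epsilon is an added guard against division by zero.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), from Landsat 5 TM bands 3 and 4."""
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    return (nir - red) / (nir + red + 1e-9)
```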
ID  Datasets                                                            Nf
1   Six-date PALSAR + six-date ASAR images                              12
2   Three-date Landsat 5 TM images                                      18
3   Three-date Landsat 5 TM + six-date PALSAR + six-date ASAR images    30
4   Three-date Landsat 5 TM + six-date PALSAR + six-date ASAR
    + Landsat 5 TM and SAR textures + three-date NDVI images           173
Table 2. Combined datasets for land cover classification in the Appin area; Nf is the number of input features.
3.2 Classification
Three non-parametric classifiers were employed for the classification processes: an Artificial Neural Network (ANN) with the Back Propagation (BP) algorithm, the Kohonen Self-Organizing Map (SOM) and the Support Vector Machine (SVM).
Artificial Neural Network (ANN)
Artificial Neural Networks are non-parametric methods that have been used widely in remote sensing, particularly for classification (Tso and Mather 2009, Bruzzone et al. 2004). The Multi-Layer Perceptron (MLP) model using the Back Propagation (BP) algorithm is the most well-known and commonly used ANN classifier, and it often provides higher classification accuracy than traditional parametric classifiers (Dixon and Candade 2008, Kavzoglu and Mather 2003). In this study, we used an MLP-BP model with three layers: an input, a hidden and an output layer. The number of neurons in the input layer equals the number of input features, and the number of neurons in the output layer equals the number of land cover classes to be classified. The optimal number of input neurons and the number of neurons in the hidden layer were determined by the GA technique. We used the sigmoid function as the transfer function. The other important parameters were set as follows: maximum number of iterations: 1000; learning rate: 0.01-0.1; training momentum: 0.9. The classifications were run using the MATLAB 2010b ANN toolbox.
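Purely as an illustration, a rough scikit-learn equivalent of the MLP-BP configuration described above is sketched below; the training arrays, the number of hidden neurons and the specific learning rate are placeholders rather than values from the study, which used the MATLAB toolbox.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder training data: 500 pixels, 30 features (dataset 3 in Table 2), 5 classes.
X_train = rng.normal(size=(500, 30))
y_train = rng.integers(0, 5, size=500)

n_hidden = 20  # in the paper this is selected by the GA; 20 is an arbitrary placeholder
mlp = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                    activation='logistic',    # sigmoid transfer function
                    solver='sgd',
                    learning_rate_init=0.05,  # within the reported 0.01-0.1 range
                    momentum=0.9,
                    max_iter=1000)
mlp.fit(X_train, y_train)
labels = mlp.predict(X_train)                 # predict on new pixel features in practice
```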
Self-Organizing Map Classifier (SOM)
The Self-Organizing Map (SOM), developed by Teuvo Kohonen in 1982, is another popular neural network classifier. The SOM network has the unique property that it can automatically detect (self-organize) the relationships within a set of input patterns without using any predefined data models (Salah et al. 2009, Tso and Mather 2009). Previous studies have shown that the SOM is an effective method for classifying remotely sensed data (Salah et al. 2009, Lee and Lathrop 2007). In this work, the size of the input layer depends on the input dataset. The output layer of the SOM was a two-dimensional array of 15×15 neurons (225 neurons in total). The neurons in the input and output layers are connected by synaptic weights, which are randomly initialised in the range 0 to 1.
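The paper does not name its SOM implementation; as an illustrative sketch only, the configuration above could be approximated with the third-party MiniSom package, with the training data, iteration count and neighbourhood width being placeholders (MiniSom also uses its own weight initialisation rather than the [0, 1] scheme described above).

```python
import numpy as np
from collections import defaultdict, Counter
from minisom import MiniSom   # third-party package, not used in the original study

rng = np.random.default_rng(0)
X_train = rng.random((500, 12))           # placeholder features scaled to [0, 1]
y_train = rng.integers(0, 5, size=500)    # placeholder land cover labels

# 15 x 15 output grid as in the paper; sigma and learning_rate are placeholders.
som = MiniSom(15, 15, X_train.shape[1], sigma=2.0, learning_rate=0.5, random_seed=0)
som.train_random(X_train, 5000)

# Label each output neuron by majority vote of the training pixels mapped to it.
votes = defaultdict(Counter)
for x, y in zip(X_train, y_train):
    votes[som.winner(x)][y] += 1
neuron_label = {pos: c.most_common(1)[0][0] for pos, c in votes.items()}

# A new pixel receives the label of its best-matching output neuron.
label = neuron_label.get(som.winner(X_train[0]))
```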
Support Vector Machines (SVM)
The SVM is another popular non-parametric classifier. It is a relatively recently developed technique that is considered robust and reliable in the field of machine learning and pattern recognition (Waske and Benediktsson 2007, Kavzoglu and Colkesen 2009). The SVM separates two classes by determining the optimal hyperplane that maximises the margin between them in a multi-dimensional feature space (Kavzoglu and Colkesen 2009). Only the nearest training samples, the so-called 'support vectors', are used to determine this optimal hyperplane. Because the algorithm only considers samples close to the class boundary, it works well with small training sets, even when high-dimensional datasets are being classified. SVMs have been applied successfully in many studies using remotely sensed imagery, where they often provided better accuracy than, or at least accuracy comparable to, other classifiers (Waske and Benediktsson 2007). In this work, an SVM classifier with a Gaussian radial basis function (RBF) kernel was used because of its effectiveness and robustness in handling remote sensing data (Kavzoglu and Colkesen 2009). Two parameters need to be specified optimally to ensure the best accuracy: the penalty parameter C and the kernel width γ. These values are determined by the GA while searching for the optimal combined datasets.
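A sketch of this setup with scikit-learn is given below; note that the paper tunes C and γ with a GA, whereas the exhaustive grid search here is only a simple stand-in for that optimisation step, and the training data and parameter ranges are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 30))      # placeholder features
y_train = rng.integers(0, 5, size=500)    # placeholder land cover labels

# Placeholder search ranges for the penalty parameter C and kernel width gamma.
param_grid = {'C': [1, 10, 100, 1000], 'gamma': [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```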