The segmentation of the images was performed in eCognition following the object-based image analysis (OBIA) concept; an appropriate scale factor was selected for the features in the scenes (RADARSAT-2). After segmentation, the extraction of dark objects is done through the class hierarchy, using the mean backscatter value of each object. The extracted objects were then characterized by geometric (shape), physical (backscatter) and Haralick textural features, the latter derived from the grey-level co-occurrence matrix (e.g., angular second moment, entropy, mean). The features used are defined as follows:
- SP2 (Second invariant planar moment) describes the shape of the object.
- BSd (Background standard deviation) is the standard deviation of pixels belonging to the region of interest, selected by the user surrounding the object.
- ConLa (Local area contrast ratio) is the ratio between the 
mean backscatter value of the object and the mean 
backscatter value of a window centred at the region. 
- Opm (Object power to mean ratio) is the ratio between the standard deviation of the object and the mean of the object.
- Opm/Bpm (Power to mean ratio) is the ratio between the 
object power to mean ratio and the background power to 
mean ratio. 
- OSd (Object standard deviation) is the standard deviation 
of the object. 
- P/A (Perimeter to area ratio) is the ratio between the perimeter (P) and the area (A) of the object.
- C (Object complexity) describes how simple (or complex) the geometrical shape of the object is.
- THm (Mean Haralick texture) is computed based on the average of the grey level co-occurrence matrices of the sub-objects.
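As an illustration, a few of the physical and geometric features above could be computed from the object and background pixel values along the following lines. This is a minimal sketch assuming numpy arrays; the function and argument names are illustrative, not taken from the processing chain used here:

```python
import numpy as np

def object_features(obj: np.ndarray, background: np.ndarray,
                    perimeter: float, area: float) -> dict:
    """Sketch of the physical and geometric feature definitions above.

    `obj` holds the backscatter values of the pixels inside a dark object;
    `background` holds the pixels of the user-selected region of interest
    surrounding it. `perimeter` and `area` come from the segmentation step.
    """
    feats = {}
    feats["BSd"] = background.std()                  # background standard deviation
    feats["OSd"] = obj.std()                         # object standard deviation
    feats["ConLa"] = obj.mean() / background.mean()  # local area contrast ratio
    feats["Opm"] = obj.std() / obj.mean()            # object power to mean ratio
    bpm = background.std() / background.mean()       # background power to mean ratio
    feats["Opm/Bpm"] = feats["Opm"] / bpm            # power to mean ratio
    feats["P/A"] = perimeter / area                  # perimeter to area ratio
    return feats
```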
In this study, SP2, P/A, and C were used as geometrical characteristics; BSd, ConLa, Opm, Opm/Bpm, and OSd were used as physical characteristics; and THm was used as the textural characteristic (Topouzelis et al., 2009; Ozkan et al., 2011). In total, 7 oil objects were extracted from RADARSAT-2 and 8 oil objects from ALOS PALSAR. All of these objects were used for testing. The statistics of the oil objects are given in Table 2.
Table 2. Statistics of features obtained from oil objects.

RADARSAT-2
Feature  | Minimum | Maximum | Mean    | Std.
SP2      | 0.172   | 1       | 0.67    | 0.269
BSd      | 35.583  | 52.582  | 47.382  | 6.153
ConLa    | 0.334   | 0.544   | 0.455   | 0.085
Opm      | 0.37    | 0.697   | 0.55    | 0.136
Opm/Bpm  | 1.11    | 1.551   | 1.383   | 0.161
OSd      | 16.365  | 34.026  | 25.827  | 6.98
P/A      | 0.076   | 0.293   | 0.182   | 0.067
C        | 1.543   | 5.442   | 3.628   | 1.512
THm      | 5.622   | 23.371  | 13.056  | 6.922

ALOS PALSAR
Feature  | Minimum | Maximum | Mean    | Std.
SP2      | 0.319   | 1       | 0.719   | 0.196
BSd      | 148.546 | 785.317 | 388.672 | 218.870
ConLa    | 0.590   | 0.769   | 0.688   | 0.066
Opm      | 0.124   | 0.212   | 0.148   | 0.030
Opm/Bpm  | 0.260   | 1.132   | 0.679   | 0.364
OSd      | 104.009 | 182.516 | 135.732 | 34.523
P/A      | 0.080   | 0.573   | 0.367   | 0.198
C        | 1.377   | 2.524   | 2.167   | 0.358
THm      | 1.092   | 17.010  | 5.487   | 6.922
Classification
Artificial neural networks are computational systems based on the principles of biological neural systems, i.e., mathematical models composed of many neurons operating in parallel. These networks have the capacity to learn, memorize, and create relationships amongst data. They have advantages over classical statistical and analytical approaches, such as their non-parametric and non-linear nature, arbitrary decision boundary capabilities, easy adaptation to different types of data and input structures, and good generalization capabilities. Although designing the network as a classifier is a hard task, approaches to oil spill detection based on a Multilayer Perceptron (MLP) neural network have been described in recent research studies because of the gains in classification performance they offer (Del Frate et al., 2000; Topouzelis et al., 2005). The optimization process that determines the weight and bias parameters of an ANN is called learning. The backpropagation delta rule (BP) is the best-known learning algorithm. Besides derivative-based conventional approaches such as Levenberg-Marquardt (LM), heuristic optimization methods such as Genetic Search have also been used in the learning phase of ANNs.
Artificial Bee Colony (ABC) is one such optimization method that can be used for ANN training. The Artificial Bee Colony algorithm is a meta-heuristic, population-based swarm intelligence algorithm developed by Karaboga (2005). The ABC algorithm mimics the intelligent foraging behaviour of honeybee swarms. The first studies of the ABC algorithm focused on examining its effectiveness on constrained and unconstrained problems against other well-known modern heuristic algorithms such as the Genetic Algorithm (GA), Differential Evolution (DE), and Particle Swarm Optimization (PSO) (Karaboga and Basturk, 2007; Karaboga and Akay, 2009). Later on, ABC was used for ANN classifier training and clustering problems (Karaboga and Ozturk, 2009), where some benchmark classification problems were tested and the results were compared with those of other widely-used techniques.
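To make the foraging analogy concrete, the core of the ABC search can be sketched as follows. This is a minimal illustration under assumed parameter values (colony size, trial limit, search bounds), not the configuration used in the paper; `f` would be, for instance, the network's training error as a function of its flattened weight and bias vector:

```python
import numpy as np

def abc_minimize(f, dim, n_sources=20, limit=50, max_cycles=1000,
                 lb=-1.0, ub=1.0, seed=0):
    """Compact ABC sketch (after Karaboga, 2005) for minimizing f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_sources, dim))   # food sources = candidate solutions
    cost = np.array([f(x) for x in X])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbour(i):
        # Perturb one randomly chosen dimension towards a random partner,
        # keeping the new source only if it is better (greedy selection).
        k = (i + rng.integers(1, n_sources)) % n_sources
        d = rng.integers(dim)
        v = X[i].copy()
        v[d] += rng.uniform(-1.0, 1.0) * (X[i, d] - X[k, d])
        c = f(v)
        if c < cost[i]:
            X[i], cost[i], trials[i] = v, c, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_sources):              # employed bee phase
            try_neighbour(i)
        fit = 1.0 / (1.0 + cost)                # fitness of each food source
        for i in rng.choice(n_sources, n_sources, p=fit / fit.sum()):
            try_neighbour(i)                    # onlooker bee phase
        stale = int(trials.argmax())            # scout bee phase: abandon
        if trials[stale] > limit:               # sources stuck too long
            X[stale] = rng.uniform(lb, ub, dim)
            cost[stale] = f(X[stale])
            trials[stale] = 0
    best = int(cost.argmin())
    return X[best], cost[best]
```

A call such as abc_minimize(error_of_network, dim=74) would then search the 74-dimensional weight space of the network described in the next section.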
APPLICATION AND RESULTS
The artificial neural network models used in this study are the ones used in the earlier study (Ozkan et al., 2011) on oil spill detection off the Lebanese coast in 2007. Since the main purpose of the paper is to examine the generalization capability of different ANN learning algorithms, no training process was applied. The network topology used is 9-6-2, consisting of one hidden layer with 6 neurons. The input and output layers are fixed by the dimensions of the input and output patterns: the two classes (oil and look-alike) are represented by 2 neurons in the output layer, and the nine different features are represented by 9 neurons in the input layer. The logarithmic sigmoid transfer function is employed at the hidden and output layer neurons. In total, 74 parameters are used. The details of the parameters of the optimization algorithms ABC, LM, and BP and of the other application features can be found in Ozkan et al. (2011).
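As a sanity check on the stated topology, a minimal sketch of the 9-6-2 forward pass with the logarithmic sigmoid at both layers is given below; the zero-valued weights are placeholders that a learning algorithm (BP, LM, or ABC) would set, and the assertion verifies the quoted total of 74 parameters (9*6 + 6 + 6*2 + 2 = 74):

```python
import numpy as np

def logsig(x):
    """Logarithmic sigmoid transfer function at hidden and output neurons."""
    return 1.0 / (1.0 + np.exp(-x))

# 9-6-2 topology: 9 input features, 6 hidden neurons, 2 output neurons.
W1, b1 = np.zeros((6, 9)), np.zeros(6)   # hidden layer weights and biases
W2, b2 = np.zeros((2, 6)), np.zeros(2)   # output layer weights and biases

# Parameter count: 9*6 + 6 + 6*2 + 2 = 74, matching the text.
assert W1.size + b1.size + W2.size + b2.size == 74

def forward(x):
    """Forward pass for one 9-dimensional feature vector; returns the
    two class responses (oil, look-alike)."""
    h = logsig(W1 @ x + b1)
    return logsig(W2 @ h + b2)
```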
As in Ozkan et al. (2011), each algorithm was run 30 times independently to reveal the robustness of the algorithms used. For the epoch numbers, two different approaches were considered: (i) all algorithms were trained for 1000 epochs; and (ii) a different optimum epoch number was used for each algorithm. Robustness can be defined as the consistency of performance over multiple runs, i.e., the narrower the interval in which an algorithm's error values cluster, the better its robustness. Good robustness means that the algorithm is not sensitive to changes in the initial conditions. Since artificial intelligence techniques always produce a result that is not necessarily optimal, robustness is a useful measure for comparing such algorithms. Therefore, the test data results from the BP, LM, and ABC algorithms are compared to each other in terms of the mean and standard deviation statistics obtained from the 30 independent runs. The producer accuracies and descriptive statistics of the ABC, LM, and BP algorithms for both 1000 epochs and the optimum epoch numbers are given in Table 3 and Table 4.
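The robustness protocol amounts to repeating training from independent initial conditions and summarizing the resulting test errors. A minimal sketch, where `train_and_test` is a hypothetical wrapper around one complete training and evaluation cycle (not a function from the paper):

```python
import numpy as np

def robustness_stats(train_and_test, n_runs=30):
    """Run an algorithm `n_runs` times from independent random initial
    conditions and summarize its test error, as done for BP, LM, and ABC.
    `train_and_test(seed)` returns the test error of one trained network."""
    errors = np.array([train_and_test(seed) for seed in range(n_runs)])
    return errors.mean(), errors.std()   # a small std indicates robustness
```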
 
	        