supervised manner, are very good at classification and inversion problems, are easy to use, work as universal approximators, have very good nonlinearity capabilities, and are the most widely used members of the feed-forward network family.
3.1 MLP neural networks
The most popular class of multilayer feed-forward networks is the multilayer perceptron (MLP). An MLP usually comprises one input layer, one or two hidden layers and one output layer. As an example, a four-layer network with two hidden layers can be seen in Figure 1. In the present study, input nodes correspond to the bands of the imagery, hidden layers are used for computations and output nodes correspond to the classes to be recognised. The individual neuron is the elemental unit of each layer. It computes the weighted sum of its inputs, adds a bias term and drives the result through a generally nonlinear activation function to produce a single output. The most common activation function is the sigmoid, which is also used in the present study. There are several training algorithms for MLP. In a previous study (Topouzelis et al., 2003), four algorithms of the gradient descent family were examined: Backpropagation (BP), Conjugate Gradient (CG), Resilient backpropagation (Rprop) and Quick Backpropagation (Quickprop). A hybrid of the backpropagation and conjugate gradient algorithms, found to work fast and reliably (Topouzelis et al., 2003), was selected for the present study.
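To make the per-neuron computation concrete, a minimal forward-pass sketch in Python/NumPy is given below. The layer sizes (7 input bands, two hidden layers of 10 units, 2 output classes) and all variable names are hypothetical, chosen only for illustration; the paper does not specify the exact architecture at this point.

import numpy as np

def sigmoid(z):
    # sigmoid activation function, as used in the present study
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, weights, biases):
    # Each neuron computes the weighted sum of its inputs, adds a bias
    # term and passes the result through the activation function.
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

# Hypothetical four-layer network: 7 input bands -> 10 -> 10 -> 2 classes
rng = np.random.default_rng(0)
sizes = [7, 10, 10, 2]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(mlp_forward(rng.normal(size=7), weights, biases))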
  
[Figure: input layer, 1st and 2nd hidden layers, output layer]
Figure 1. An example of an MLP network
3.2 RBF neural networks 
The Radial Basis Function neural network, which has three layers, can be seen as a special class of multilayer feed-forward networks. Each unit in the hidden layer employs a radial basis function, such as a Gaussian kernel, as its activation function. The output units implement a weighted sum of the hidden unit outputs. The mapping from the input layer to the hidden layer of an RBF network is nonlinear, whereas the mapping from the hidden layer to the output is linear. Each radial basis (kernel) function is centred at the point specified by the weight vector associated with the unit. Both the positions and the widths of these kernels are learned from training patterns. Each output unit implements a linear combination of these radial basis functions. Figure 2 illustrates the architecture of an RBF network. Coefficients mu_j represent the centres of the radial basis functions and w_kj are the weighting coefficients of the linear combination.
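The hidden-layer and output-layer computations described above can be sketched as follows. Dimensions and names are hypothetical; only the structure, Gaussian hidden units followed by a linear output layer, is taken from the text.

import numpy as np

def rbf_forward(x, centers, sigmas, W):
    # Gaussian kernel activation of each hidden unit, centred at mu_j
    d2 = np.sum((centers - x) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigmas ** 2))
    # output units: linear combination with weighting coefficients w_kj
    return W @ h

# Hypothetical dimensions: 7 input bands, 5 hidden kernels, 2 output classes
rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 7))  # centres mu_j, learned from training data
sigmas = np.ones(5)                # kernel widths, also learned
W = rng.normal(size=(2, 5))        # weighting coefficients w_kj
print(rbf_forward(rng.normal(size=7), centers, sigmas, W))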
There is a variety of training algorithms for RBF networks. In the present study, the Dynamic Decay Adjustment (DDA) algorithm is used. The DDA algorithm uses constructive training, in which new RBF nodes are added whenever necessary. It is characterised by fast training (only a few epochs are needed to complete training) and guaranteed convergence (SNNS, 1998). The main characteristic of the algorithm is that when a training pattern is misclassified, either a new RBF unit is introduced or the weight of an existing RBF unit is incremented.
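A minimal sketch of one constructive DDA epoch, following the behaviour described above, might look as follows. The thresholds 0.4 and 0.2 are the commonly cited DDA defaults, and the prototype representation is an assumption made for illustration only.

import numpy as np

def dda_train_epoch(patterns, prototypes, theta_plus=0.4, theta_minus=0.2):
    # One constructive DDA epoch (sketch). Each prototype is a dict with
    # keys 'center', 'sigma', 'weight' and 'label'.
    for x, label in patterns:
        def act(p):
            # Gaussian activation of prototype p at pattern x
            return np.exp(-np.sum((x - p['center']) ** 2) / p['sigma'] ** 2)
        covered = [p for p in prototypes
                   if p['label'] == label and act(p) >= theta_plus]
        if covered:
            # pattern already covered: reinforce the best-matching unit
            max(covered, key=act)['weight'] += 1.0
        else:
            # commit a new RBF unit centred on the misclassified pattern
            prototypes.append({'center': np.array(x, dtype=float),
                               'sigma': 1.0, 'weight': 1.0, 'label': label})
        # shrink units of conflicting classes until their response at x
        # falls below theta_minus
        for p in prototypes:
            if p['label'] != label:
                d2 = np.sum((x - p['center']) ** 2)
                if d2 > 0.0:
                    p['sigma'] = min(p['sigma'],
                                     np.sqrt(d2 / -np.log(theta_minus)))
    return prototypes

Training would start from an empty prototype list and repeat such epochs until no new units are committed and no weights change, which is why only a few epochs are typically needed.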
Figure 2. An example of an RBF network
Because of their non-linear characteristics, RBF networks are commonly used in complex applications and are often considered superior to perceptron networks. In complicated cases, perceptrons require many neurons, considerable computational power and time in order to calculate the hyperplanes that separate the desired classes. The main difference in the way the two neural network models try to solve a classification problem is illustrated in Figure 3: an MLP calculates hyperplanes in order to separate classes, while an RBF network uses kernels to group pixels of the same class. To our knowledge, comparisons of different neural network models for the problem of oil spill detection are not available in the literature. In this paper, we present a comparison between the two commonly used neural network models, RBF and MLP neural networks.
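The contrast can be made concrete with two single units (all parameter values below are arbitrary and for illustration only): a sigmoid unit responds according to which side of the hyperplane w.x + b = 0 an input lies on, a global half-space response, while a Gaussian unit responds according to the distance from its centre mu, a local radial response.

import numpy as np

w, b = np.array([1.0, -1.0]), 0.5      # hypothetical hyperplane parameters
mu, sigma = np.array([0.0, 0.0]), 1.0  # hypothetical kernel centre and width

def mlp_unit(x):
    # half-space response: the sign of w.x + b determines the class side
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def rbf_unit(x):
    # radial response: decays with distance from the centre mu
    return np.exp(-np.sum((x - mu) ** 2) / (2.0 * sigma ** 2))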
[Figure: hyperplane decision boundary (MLP) vs. kernel function grouping (RBF)]
Figure 3. MLP and RBF classification approach 
4. SAR IMAGES AND DATASET DESCRIPTION 
4.1 General overview 
The method developed was applied to an ERS-1 image captured on 1/6/1992 (orbit 4589, frame 2961). The image represents a rough sea surface, sufficient to produce a strong contrast signal in the presence of oil spills. It also contains look-alikes in the left part, caused by a different sea state (local wind falls within a big swell wave). In the experiments implemented, it was observed that the number of inputs [...]
4.2 Pre...

[...]
Thank you.