
The panorama of neural algorithms is extremely wide, and such algorithms have been developed to solve very different kinds of applications: it is the choice of application that determines the choice of algorithm.
Attention has been paid to the MLP algorithm (Multi Layer Perceptron) to obtain a geometric correction of satellite images.
Function approximation and estimation properties of this (non-linear) algorithm have already been widely described in the literature. The basic idea is that of substituting the upward projection model that relates the image coordinates (ξ, η) to the object coordinates (X, Y and Z) with an MLP neural network suitably trained on the basis of the GCPs.
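As a purely illustrative sketch (not taken from the paper), the substitution can be pictured as follows, where net is an MLP already trained on the GCPs and the variable names are hypothetical:

    % classical approach: an analytical upward projection model, e.g.
    %   [xi, eta] = projection_model(X, Y, Z, model_parameters)
    % neural approach: the trained network plays the same role
    % ([X; Y; Z] is a 3 x n matrix of object coordinates)
    img_coords = sim(net, [X; Y; Z]);   % returns [xi; eta], 2 x n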
The reasons for this choice arose from an analysis of the problems connected to the previously described RFM approach. Neural networks avoid a forced linearisation of the equations around an approximate solution. They give a non-linear response to a non-linear problem, and their efficiency increases, just as for RFMs, with the growth of the number of GCPs and with the decrease of the original image deformations.
In the MLP network each neuron performs a very simple operation: it generates, through a suitable function known as the transfer function, a response to the signals that converge on it through communication channels. These channels simulate the biological synapses and their duty consists in "weighting" the intensity of the transmitted signals: for this reason they are known as "synaptic weights" or simply "weights".
Formally, the response signal $u_i$ returned by the generic neuron $i$ is equal to:

$$u_i = f\left(\sum_{j=1}^{N} w_{ij}\, p_j + b_i\right) \qquad (8)$$

where $f$ is the transfer function, which normally takes the shape of a hyperbolic tangent (9) or of a logistic sigmoid (10), $w_{ij}$ are the weights of the $i$-th neuron, $p_j$ are the $N$ inputs to the $i$-th neuron and $b_i$ is a scalar additive term, called bias, which can be considered as the weight of an additional unitary input (Figure 2).
  
$$f(x) = \frac{1 - e^{-2x}}{1 + e^{-2x}} \qquad \text{Hyperbolic tangent} \quad (9)$$

$$f(x) = \frac{1}{1 + e^{-x}} \qquad \text{Logistic sigmoid} \quad (10)$$
Figure 2 — Mathematical model of a two-layer computational MLP neural network (hidden and output).
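As a minimal numerical sketch of equation (8), with arbitrary illustrative values, the response of a single neuron with $N = 3$ inputs can be computed as:

    N = 3;
    w = [0.2 -0.5 0.1];       % synaptic weights of the neuron (1 x N)
    p = [1.0; 0.3; -2.0];     % input signals converging on the neuron (N x 1)
    b = 0.4;                  % bias, weight of an additional unitary input
    x = w*p + b;              % weighted sum of the inputs, eq. (8)
    u_tanh = (1 - exp(-2*x)) / (1 + exp(-2*x));   % hyperbolic tangent, eq. (9)
    u_logsig = 1 / (1 + exp(-x));                 % logistic sigmoid, eq. (10)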
This type of algorithm belongs to the feed-forward family of 
neural networks, that is, networks in which the information 
travels in parallel and in a single direction. 
The MLP network therefore constitutes a mathematical model in which the parameters are the weights and the biases of the hidden and of the output layers. The estimation of the values of these parameters on the basis of suitable samples (patterns) represents the training step of the network. In this application the training algorithm is an Error Backpropagation (EBP) optimised for a greater convergence speed, known as Levenberg-Marquardt (LM).
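For reference, the standard Levenberg-Marquardt weight update implemented by such optimised EBP schemes (and by MATLAB's trainlm) takes the form

$$W(t+1) = W(t) - \left(J^T J + \mu I\right)^{-1} J^T E(t)$$

where $J$ is the Jacobian of the network errors with respect to the weights and $\mu$ is an adaptive damping factor that blends the Gauss-Newton step ($\mu \to 0$) with gradient descent ($\mu$ large).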
In the EBP algorithm the network weights assume values that minimise (local minima) the Performance Function (PF). This is defined, for a batch training, as:

$$PF(W(t)) = \sum_{p=1}^{P} \sum_{k=1}^{K} \left(d_{kp} - u_{kp}\right)^2 = E(t)^T E(t) \qquad (11)$$

where $W(t) = [w_1\ w_2 \dots w_N]^T$ is the weight vector of the network at epoch $t$, $t$ counts the epochs of the training process and is fixed by the operator, $d_{kp}$ is the expected value (target) of the $k$-th output relative to the $p$-th training pattern, $u_{kp}$ is the value of the $k$-th output calculated by the network, and $E(t) = [e_{11}\ e_{21} \dots e_{K1}\ e_{12} \dots e_{K2} \dots e_{1P} \dots e_{KP}]^T$, in which $e_{kp} = (d_{kp} - u_{kp})$, is the cumulative error of a batch training.
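A minimal sketch of how equation (11) can be evaluated for a batch (illustrative only; D and U are hypothetical K x P matrices of targets and network outputs):

    E = D(:) - U(:);     % column-major stacking reproduces the ordering of E(t)
    PF = E' * E;         % performance function, eq. (11)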
MATLAB 5.3 Neural Network Toolbox routines have been used. An upward "orthoprojection" approach has been adopted, so that the object coordinates (X, Y, Z) constitute the network input and the image coordinates (ξ, η) constitute the output. Only one hidden layer and one output layer have been foreseen. Two network configurations have been implemented and verified with respect to the possible transfer functions. In the first case, the transfer function adopted for the hidden layer, which proves more appropriate for the treatment of pushbroom images (to which the results presented here refer), is a hyperbolic tangent (9), while, for the output layer, it is a simple linear function (a purely weighted sum).
In the second case, which is considered more suitable for the treatment of whiskbroom images, a logistic sigmoid transfer function (10) has been adopted for the hidden layer, while, also in this case, a simple linear function has been adopted for the output layer.
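A sketch of how the two configurations could be set up with the Neural Network Toolbox calls of that era (the GCP arrays P_gcp, 3 x n with rows X, Y, Z, and T_gcp, 2 x n with rows ξ, η, as well as the hidden-layer size M, are assumptions):

    % pushbroom case: hyperbolic tangent hidden layer, linear output layer
    net1 = newff(minmax(P_gcp), [M 2], {'tansig','purelin'}, 'trainlm');
    % whiskbroom case: logistic sigmoid hidden layer, linear output layer
    net2 = newff(minmax(P_gcp), [M 2], {'logsig','purelin'}, 'trainlm');
    net1 = train(net1, P_gcp, T_gcp);   % LM training on the GCPs
    xieta = sim(net1, P_gcp);           % image coordinates predicted by the net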
The number of neurons of the hidden layer that leads to the best performances has to be determined each time on the basis of repeated tests: in the developed MATLAB routine, the computer itself does this automatically.
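A possible shape for such an automatic search (an illustrative sketch only: the check-point arrays P_cp and T_cp, the bounds Mmax and nTrials, and the use of check-point RMSE as the selection criterion are assumptions, not the paper's exact routine):

    best_rmse = Inf;
    for M = 2:Mmax                       % candidate hidden-layer sizes
        for k = 1:nTrials                % repeat: results depend on initialisation
            net = newff(minmax(P_gcp), [M 2], {'tansig','purelin'}, 'trainlm');
            net.trainParam.epochs = 200;
            net.trainParam.show = NaN;   % suppress training display
            net = train(net, P_gcp, T_gcp);
            r = sim(net, P_cp) - T_cp;   % residuals on independent check points
            rmse = sqrt(mean(r(:).^2));
            if rmse < best_rmse
                best_rmse = rmse; best_net = net; best_M = M;
            end
        end
    end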
It should be recalled that an approximate estimation (as we are working in a non-linear ambit, these considerations are purely indicative) of the maximum number of admissible neurons can be obtained by comparing the number of training patterns (the GCPs) with the number of parameters to be estimated (weights and biases). The latter is equal to:

$$N_{param} = (3 + 1) \cdot M + (M + 1) \cdot 2 = 6M + 2 \qquad (12)$$

where $M$ is the number of neurons of the hidden layer.
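For instance, under this purely indicative reading, a hidden layer of $M = 10$ neurons gives

$$N_{param} = (3 + 1) \cdot 10 + (10 + 1) \cdot 2 = 62$$

so at least about sixty training patterns would indicatively be required before such a configuration stops being underdetermined.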
More precise indications can be derived from a careful analysis of the results (residuals), verifying the possible appearance of overfitting phenomena, which, as a first approximation, can be identified in the progressive spread of the difference between the residuals calculated on the GCPs and those calculated on the Check Points (CPs).
The accuracy, in terms of residuals, also varies to a certain extent with the initial weights and biases of the training, although this influence is quite negligible.
The architecture of the network that is able to give the best performances is not known a priori, so the operator has to supply:
• the range within which the maximum number of hidden neurons is to be determined, fixed on the basis of the available GCPs number and of the required approximation capacity;
• the number of repeated tests for each architecture: since the results depend on the number of neurons and on how the initialisation drives the minimisation towards a local minimum, the training has to be run several times for each configuration;
• the level of accuracy to be reached, on the basis of which the search is stopped.
The threshold fixed on the accuracy of the final orthoimage helps, in a preliminary way, to select the network that guarantees the best approximation in terms of RMSE.
The RFM and the neural approaches have then been applied to the test images; the evaluation of the estimated models has been carried out through the residuals on the Check Points (which were not used in the estimation), which give a measure of the error. A good behaviour on the entire image confirms the validity of the approaches, whose accuracy grows with the increase in the number of GCPs.
	        