[Figure 1: input signals X are transformed by the system into output signals Y]
Figure 1. Schematic representation of the bridge as a system 
Heunecke (1995) and Welsch (1996) have classified dynamic
system identification models into three main types: parametric
or white box models, grey box models, and non-parametric or
black box models.
If the physical relationship between input and output signals,
i.e. the transmission or transfer process of the signals through
the object (in other words, the transformation of the input to
output signals), is known and can be described by differential
equations, then the model is called a parametric or white box
model (Welsch and Heunecke 1999). Models using a chosen a
priori model structure or a partially physically motivated
analysis are the so-called grey box models, whereas
non-parametric or black box models identify the dynamic
process experimentally.
Artificial neural networks belong to the family of black box
models, which can map an input domain onto any given output
domain. Although such networks successfully map complex
relationships between input and output signals, one cannot make
any inference about the underlying physical process just by
looking at the transmission or transfer phase of the neural
network. The following sections describe neural networks and
their use in deformation modelling.
3. ARTIFICIAL NEURAL NETWORKS 
Artificial neural networks simulate the human brain with regard
to the functional relationships between neurons. A neuron is the
basic processing unit in the human brain; it has synaptic
connections with other neurons in order to produce a decision or
inference as output signals. Biological systems are able to
perform extraordinarily complex computations in the real world
without recourse to explicit quantitative operations. This
property of the biological nervous system has encouraged
scientists to adopt the same structure as a mathematical tool for
the identification of complex systems. Indeed, this idea is not
quite new; the major advance of artificial neural networks began
in recent decades with developments in computer technology. The
learning capability of organic neurons could then be imitated
easily using computers, since the computation of the network
parameters in an iterative procedure involving derivatives and
gradients of the performance function had previously been
extremely difficult to handle. Figure 2 depicts the structure of
a single neuron in an artificial neural network. The function of
an artificial neuron is similar to that of a real neuron: it
integrates input from other neurons and communicates the
integrated signal to a decision making centre.
  
[Figure 2: inputs x_1 ... x_n multiplied by weights w, summed together with a bias b, and passed through an activation function f(a) to give the output y]

Figure 2. Single artificial neuron
The functional operation of a neuron is summarized as 
y_i = f(a_i) = \frac{1}{1 + \exp(-\beta a_i)}    (1)

with

a_i = \sum_j w_{ij} x_j + b_i    (2)
where y_i is the activity output of neuron i, a_i is the weighted
sum of neuron i from the input of the neurons in the previous
layer, b_i is the bias term of neuron i, x_j is the input from
neuron j, w_ij is the weight between the two neurons i and j,
and the constant \beta is a threshold value which shifts the
activation function f(a) along the x axis. An activation function
is a non-linear function that, when applied to the input of the
neuron, computes the output of that neuron. Various types of
activation functions are used in neural computing applications,
such as the hyperbolic tangent, Heaviside, Gaussian,
multi-quadratic and piecewise linear functions (Haykin, 1994).
The one given in Eq. (1) is the most commonly used, the
so-called sigmoid function.
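As an illustration, the following short Python sketch (our own, not from the paper; the variable values and the choice \beta = 1 are assumptions) evaluates Eqs. (1) and (2) for a single neuron:

import numpy as np

def neuron_output(x, w, b, beta=1.0):
    # Eq. (2): weighted sum of the inputs plus the bias term
    a = np.dot(w, x) + b
    # Eq. (1): sigmoid activation with threshold constant beta
    return 1.0 / (1.0 + np.exp(-beta * a))

# Hypothetical example: one neuron with three inputs
x = np.array([0.5, -1.2, 0.3])   # inputs x_j from the previous layer
w = np.array([0.8, 0.1, -0.4])   # weights w_ij
print(neuron_output(x, w, b=0.2))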
3.1. Multilayer Networks 
Multilayer networks are the most commonly known feed-forward
networks. Neural networks typically consist of many simple
neurons located on different layers, which operate in
cooperation with the neurons on the other layers in order to
achieve a good mapping of input to output signals. The
expression "feed-forward" emphasizes that the flow of the
computation is from the input towards the output. There are
three different types of layers in the concept of neural
networks: the input layer (the one to which external stimuli are
applied), the output layer (the layer that outputs the result),
and hidden layers (intermediate computational layers between
input and output). Theoretically, there is no limit on the
number of hidden layers in a network configuration. The number
of hidden layers, however, has a great effect on the computation
time, as does the number of neurons in the hidden layers.
Therefore, a compromise has to be found in order to achieve an
optimal network configuration with an acceptable convergence
time and quantitative precision.
Figure 3 gives a sample configuration of a multilayer feed- 
forward (MLFF) network with one input, one output and one 
hidden layer. Note that the network consists of five inputs and 
one output. 
  
  
[Figure 3: five input nodes feeding a hidden layer, whose outputs feed a single output node y(k)]
Figure 3. A schematic representation of a multilayer feed- 
forward (MLFF) neural network 
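To make such a configuration concrete, the sketch below (our own illustration; the hidden-layer size of four and the random weights are assumptions, not values from the paper) propagates the five inputs of Figure 3 through one hidden layer to a single output:

import numpy as np

def sigmoid(a, beta=1.0):
    # Eq. (1): sigmoid activation
    return 1.0 / (1.0 + np.exp(-beta * a))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 5, 4, 1          # five inputs and one output as in
                                         # Figure 3; 4 hidden neurons assumed
W1 = rng.normal(size=(n_hidden, n_in))   # input -> hidden weights
b1 = np.zeros(n_hidden)                  # hidden-layer bias terms
W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output weights
b2 = np.zeros(n_out)                     # output-layer bias term

def mlff_forward(x):
    # Feed-forward: computation flows from the input towards the output
    h = sigmoid(W1 @ x + b1)             # hidden-layer activations
    return sigmoid(W2 @ h + b2)          # network output y(k)

x = rng.normal(size=n_in)                # external stimuli on the input layer
print(mlff_forward(x))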
 
	        