[The opening column of this page is garbled in the scanned source. From the surviving fragments it covered the introduction and the review of self-organizing networks: Kohonen's SOM model with its neuron grid trained on image pixels, earlier applications such as template matching and classification, the rectangular or hexagonal topology of the lattice, the weight vector of a neuron (Equation 1), the two phases of the learning process, and the winner-selection rule that starts each learning step (Equation 3).]
  
The second step of the learning algorithm is a weight update for epoch t+1 in a simple way:

$m_i(t+1) = m_i(t) + h_{ci}(t)\,[x - m_i(t)]$ , (4)

where t means the epoch, and the coefficient function is defined as follows:

$h_{ci}(t) = \begin{cases} \alpha(t) & \text{if } i \in N_c(t) \\ 0 & \text{otherwise} \end{cases}$ (5)
This formula contains the distinctive feature of the Kohonen model, namely the time-dependent treatment of the neurons' neighborhood in the form of $N_c(t)$. In the case of a rectangular neuron lattice, the neighborhood can be interpreted as concentric squares around the winner. If the neighboring neurons lie in the acceptable region, their weights are updated by the factor $\alpha(t)$. The limits of this function are known: it must lie between 0 and 1, and it usually decreases monotonically.
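To make the learning rule concrete, the following minimal sketch (Python with NumPy; all names are illustrative and not taken from the paper) implements Equations 4 and 5 on a rectangular lattice with a square neighborhood:

import numpy as np

def som_epoch(weights, grid, samples, alpha, radius):
    # weights: (q, d) array of neuron weight vectors m_i
    # grid:    (q, 2) array of lattice coordinates of the neurons
    # alpha:   learning coefficient alpha(t), between 0 and 1
    # radius:  current neighborhood radius defining N_c(t)
    for x in samples:
        # Winner selection: the neuron whose weight vector is closest to x
        c = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Concentric squares around the winner = Chebyshev distance on the grid
        hood = np.max(np.abs(grid - grid[c]), axis=1) <= radius
        # Equation 4: m_i(t+1) = m_i(t) + h_ci(t) [x - m_i(t)]
        weights[hood] += alpha * (x - weights[hood])
    return weights

In a full run both alpha and radius would be decreased from epoch to epoch, realizing the coarse ordering phase first and the fine tuning phase afterwards.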
3. THE SONG MODEL 
When the author of this paper applied the Kohonen-type neuron model to road junctions, considerable anomalies were observed. They were caused by the regularity of the grid structure, which is quite far from the graph-like shape of road crossings.
The basic idea of the extension of the SOM model was to integrate the graph structure with its flexibility in form. The newly named Self-Organizing Neuron Graph (shortly SONG) model has an undirected acyclic graph, where the computational elements are located in the graph nodes. The edges of the graph ensure the connections between the participating neurons. The learning algorithm of the SONG technique keeps the Kohonen rule for winner selection and weight update, but the neighborhood had to be formulated in a different way.
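As an illustration of this structure, a neuron graph can be held as a weight array plus an undirected edge list; the T-junction below is a hypothetical example, not one taken from the paper:

import numpy as np

# Hypothetical T-junction graph: five neurons whose weights are 2-D positions
weights = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0],   # horizontal arm
                    [1.0, 1.0], [1.0, 2.0]])              # vertical arm
# Undirected, acyclic edges connecting the participating neurons
edges = [(0, 1), (1, 2), (1, 3), (3, 4)]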
The connections (edges) of the graph are mostly given by the adjacency, commonly in the form of a matrix. The elements of the matrix are defined by Equation 6:

$a_{ij} = \begin{cases} 1 & \text{if nodes } i \text{ and } j \text{ are connected} \\ 0 & \text{otherwise} \end{cases}$ (6)
This matrix is theoretically a symmetric square matrix of size q × q. The nonzero matrix elements represent the direct relations between two neurons. The term adjacency can be extended in order to express more information than the binary "direct neighbor or not". Therefore graph algorithms are implemented which derive the generalized adjacency matrix $A^k$. This generalized adjacency matrix has zero elements only in the main diagonal (there are no loops); all the other values give the number of the intermediate edges between two nodes. The simplest generalization algorithms are the matrix power technique (Henley 1973; Swamy 1981) and the Floyd-Warshall method (Warshall 1962; Reinhard 2002).
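As a minimal sketch of the second option (unit edge weights assumed, illustrative names), the Floyd-Warshall relaxation turns the binary adjacency of Equation 6 into the edge counts of the generalized matrix:

import numpy as np

def generalized_adjacency(edges, q):
    # Equation 6 as starting point: 1 for connected pairs, infinity otherwise,
    # zeros in the main diagonal (no loops)
    A = np.full((q, q), np.inf)
    np.fill_diagonal(A, 0.0)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    # Floyd-Warshall relaxation: A[i, j] becomes the number of edges on the
    # shortest path between nodes i and j
    for m in range(q):
        A = np.minimum(A, A[:, m:m + 1] + A[m:m + 1, :])
    return A

For the hypothetical T-junction above, generalized_adjacency(edges, 5) gives, for instance, the value 2 between nodes 0 and 3 (two edges via node 1).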
The generalized adjacency matrix makes it possible to modify the learning coefficient formula for graphs:

$h_{ci}(t) = \begin{cases} \alpha(t) & \text{if } A^k_{ci} \le d(t) \\ 0 & \text{otherwise} \end{cases}$ (7)
In the above expression the condition is evaluated on the row of the matrix that belongs to the winner neuron.
As mentioned regarding the Kohonen model, there is an ordering and a tuning phase. These two computational blocks are inherited in the SONG technique, too. Because the graph adjacency is invariant during the run (the connections of the neurons are fixed), the adjacency matrix can be created prior to the iterations (Barsi 2003a).
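Putting the pieces together, one SONG epoch might look like the sketch below (Equation 7 with a precomputed A^k; the threshold d(t) and the coefficient alpha(t) would shrink over the epochs to realize the ordering and tuning phases):

import numpy as np

def song_epoch(weights, A_k, samples, alpha, d):
    # A_k: precomputed generalized adjacency matrix, invariant during the run
    # d:   current neighborhood threshold d(t), measured in graph edges
    for x in samples:
        # The Kohonen winner selection is kept unchanged
        c = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Equation 7: the condition is evaluated on the winner's row of A^k
        hood = A_k[c] <= d
        weights[hood] += alpha * (x - weights[hood])
    return weights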
The calculations have been accelerated by applying buffering around the neurons. Such buffers are to be interpreted as boundaries for the potential data points, so the distance computations, and therefore the whole processor load, can be limited. Figure 1 illustrates the generalized adjacency in gray shades and the buffers in the case of a letter-shaped graph.
  
Figure 1. Neuron graph with buffers, where the neurons are 
colored by the generalized adjacency values 
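The paper gives no formula for the buffer geometry; the sketch below assumes a simple axis-aligned box of half-width r around every neuron and discards the data points that fall into no buffer before any winner search is run:

import numpy as np

def filter_by_buffers(points, weights, r):
    # A data point remains a candidate only if it lies inside the buffer
    # (assumed here: a box of half-width r) of at least one neuron
    keep = np.zeros(len(points), dtype=bool)
    for m in weights:
        keep |= np.all(np.abs(points - m) <= r, axis=1)
    return points[keep]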
The described SONG model is based on the adjacency information. During the development another variant has been created, which is based on the distances between the neurons. In the distance SONG model the generalized distance matrix $D^k$ is used instead of $A^k$, but this second algorithm is significantly slower because the distance matrix is not invariant: the neuron weights (positions) change in every iteration step, so the distance matrix must be updated (Barsi 2004). As a result, the distance SONG method can only be applied to refine the positions produced by the adjacency SONG algorithm.
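The extra cost of the distance variant can be sketched in the same style (Euclidean edge lengths assumed): D^k must be rebuilt from the current weights in every epoch, whereas A^k was computed once before the iterations.

import numpy as np

def generalized_distance(weights, edges):
    # Same Floyd-Warshall relaxation as for A^k, but with Euclidean edge
    # lengths; must be recomputed whenever the neuron positions change
    q = len(weights)
    D = np.full((q, q), np.inf)
    np.fill_diagonal(D, 0.0)
    for i, j in edges:
        D[i, j] = D[j, i] = np.linalg.norm(weights[i] - weights[j])
    for m in range(q):
        D = np.minimum(D, D[:, m:m + 1] + D[m:m + 1, :])
    return D

With Floyd-Warshall this rebuild costs on the order of q^3 operations per epoch, which explains why the distance SONG algorithm is significantly slower.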
4. RESULTS 
The application of the SONG technique to image analysis tasks is discussed through three types of examples. The first example is a template matching use case, where a previously given fiducial structure has to be matched. The complexity of the graphs is shown in the second group of experiments, where a given building structure has to be found and the structure of a complex building has to be described in the form of a neuron graph. The last tests implement a running window-type application of the SONG algorithm: segments of a road network are detected by a kernel graph.