4.1 Self-organizing feature map 
The Kohonen Self-Organizing Feature Map (SOM) is a neural network which is trained using competitive learning. Basic competitive learning means that a competition process takes place before each cycle of learning: a winning processing element is chosen by some criterion. After the winning processing element is chosen, its weight vector is adapted according to the learning law in use (Hecht-Nielsen, 1990).
SOM creates topologically ordered mappings between the input data and the processing elements of the map. Topologically ordered means that if two inputs are similar, then the most active processing elements responding to those inputs are located near each other in the map, and the weight vectors of the processing elements are arranged in ascending or descending order, w_i < w_{i+1} for all i or w_i > w_{i+1} for all i (this definition is valid for a 1-dimensional SOM). The motivation behind SOM is that some sensory processing areas of the brain are ordered in a similar way (Kangas, 1994).
SOM is usually represented as a two-dimensional matrix (other dimensions can also be used) of processing elements. Each processing element has its own weight vector, and the learning of SOM is based on the adaptation of these weight vectors (Kohonen, 1990).
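As a concrete illustration, a map of this kind can be stored as a simple three-dimensional array. The following sketch (in Python with NumPy; all names and sizes are ours, not the paper's) holds one d-dimensional weight vector per processing element of a rows × cols map:

```python
import numpy as np

# A 2-D SOM: a rows x cols grid of processing elements, each owning
# a d-dimensional weight vector (sizes here are illustrative).
rows, cols, d = 10, 10, 3
rng = np.random.default_rng(seed=0)
weights = rng.uniform(-0.1, 0.1, size=(rows, cols, d))  # small random initialization
```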
The processing elements of the network are made competitive in a self-organizing process, and the winning processing element whose weights are updated is chosen by some criterion. Usually this criterion is to minimize the Euclidean distance between the input vector and the weight vector. SOM differs from basic competitive learning in that, instead of adapting only the weight vector of the winning processing element, the weight vectors of neighboring processing elements are also adapted. At first the size of the neighborhood is large, making a rough ordering of SOM possible, and the size is decreased as time goes on. Finally, in the end, only the winning processing element is adapted, making the fine tuning of SOM possible. The use of a neighborhood makes the topological ordering process possible and, together with competitive learning, makes the process nonlinear (Kohonen, 1990).
The basic idea is that the weight vectors of the processing elements approximate the probability density function of the input vectors. In other words, there are many weight vectors close to each other in high-density areas of the density function and fewer weight vectors in low-density areas.
Mathematically speaking, SOM learns a continuous topological mapping f: B ⊂ R^d → C ⊂ R^m. This is a nonlinear mapping from the d-dimensional space of input vectors to the m-dimensional space of SOM. Strict mathematical analysis exists only for simplified cases of SOM; it has proved difficult to express the dynamic properties of SOM as mathematical theorems (Kohonen, 1990).
4.2 SOM learning algorithm 
1. Initialize weights to small random values.
2. Choose an input randomly from the dataset.
3. Compute the Euclidean distance from the input vector to the weight vectors of all processing elements.
4. Select the winning processing element j with minimum distance. The winning processing element is also called the best matching unit (BMU).
5. Update the weight vectors of processing element j and its neighbors using the following learning law, which moves a weight vector toward the input vector:

   w_i(t+1) = w_i(t) + α(t)(x(t) − w_i(t)),    (8)

   where the gain term α (0 < α < 1) decreases in time. The size of the neighborhood also decreases in time (only the weight vectors of processing elements belonging to the neighborhood are updated). Here a processing element i belongs to the neighborhood if d_c(j, i) < T, where d_c is the Chebyshev distance, j is the winning processing element, i is another processing element, and T is a threshold which decreases in time.
6. Go to step 2, or stop the iteration when enough inputs have been presented. (Lippmann, 1987)
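As an illustration of steps 1-6, here is a minimal training-loop sketch in Python with NumPy. The function name train_som, the initial threshold T0, and the linear decay schedules for the gain term α(t) and the threshold T(t) are our assumptions; the paper only states that both quantities decrease in time:

```python
import numpy as np

def train_som(data, rows=10, cols=10, n_iter=10000, alpha0=0.5, seed=0):
    """Sketch of SOM learning steps 1-6; decay schedules are illustrative."""
    rng = np.random.default_rng(seed)
    d = data.shape[1]
    # 1. Initialize weights to small random values.
    w = rng.uniform(-0.1, 0.1, size=(rows, cols, d))
    # Grid coordinates of every processing element, used for the
    # Chebyshev neighborhood test in step 5.
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    T0 = max(rows, cols) / 2.0  # initial neighborhood threshold (assumed)
    for t in range(n_iter):
        # 2. Choose an input randomly from the dataset.
        x = data[rng.integers(len(data))]
        # 3. Compute the Euclidean distance to all processing elements.
        dist = np.linalg.norm(w - x, axis=2)
        # 4. Select the winning processing element (best matching unit).
        bi, bj = np.unravel_index(np.argmin(dist), dist.shape)
        # Gain term and threshold both decrease in time; linear decay
        # is our assumption, the paper only requires a decrease.
        frac = 1.0 - t / n_iter
        alpha = alpha0 * frac
        T = max(T0 * frac, 1.0)  # in the end, only the BMU is adapted
        # 5. Update the BMU and all neighbors with Chebyshev distance < T,
        #    moving each weight vector toward the input (equation 8).
        cheb = np.maximum(np.abs(ii - bi), np.abs(jj - bj))
        mask = cheb < T
        w[mask] += alpha * (x - w[mask])
        # 6. Loop until enough inputs have been presented.
    return w
```

For example, train_som(np.random.rand(1000, 3)) orders a 10 × 10 map over random 3-dimensional inputs.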
4.3 SOM in feature extraction 
SOM is usually arranged as a two-dimensional matrix (other dimensions can also be used) of processing elements. As a result of the learning phase, processing elements which are spatially close to each other respond in a similar way to the presented input pattern; in other words, the map is topologically ordered. SOM also makes a nonlinear transformation from the d-dimensional input space to the m-dimensional map space, which is defined by the coordinates of the processing elements. All these properties are useful in feature extraction.
In feature extraction, the original feature vector is presented to SOM, and its winning processing element and its map coordinates are searched. These map coordinates could be used as the transformed features, but usually there is a limited number of processing elements, and many different input vectors get the same coordinates. This means that even if the density function of the input vectors is continuous, the density function of the transformed vectors is not.
A better way to make the transformation is to use the distances computed during the search for the winning processing element. There are two alternatives:
A. A weighted mean of the map coordinates is computed, using the inverse distances from the input vector to the weight vectors as weights. These mean values are used as the transformed vector (see the sketch after this list).
B. The coordinates of the BMU are searched, and distances are computed from the input vector to the BMU (d_1) and to the second-closest weight vectors (d_2) in the row and column directions. The transformed value is the coordinate of the BMU = …