Full text: XVIIth ISPRS Congress (Part B3)

  
  
competition, the node v_j of F4 which has the largest matching score P_j is chosen. If P_j is greater than a vigilance threshold ρ, the winning node v_j of F4 then triggers associative pattern learning within the weights which sent inputs to this node. The learning rules are written as follows:

    dα_j/dt = (1/n_j)(θ_i − α_j),
    dc_j/dt = (1/n_j)(x* b_1j + y* b_2j − c_j),
    dn_j/dt = 1,   b_1j = cos α_j,   b_2j = sin α_j,         (12)

where the coordinates (x*, y*) can be computed using

    [x*]   [  b_2j²      −b_1j b_2j ] [x_i]        [b_1j]
    [y*] = [ −b_1j b_2j    b_1j²    ] [y_i] + c_j [b_2j],    (13)
and their meaning will be given later. Otherwise, if P_j is less than ρ, a previously uncommitted node v_j of F4, whose adapting number n_j is zero, is selected. Its weights are adapted according to the following rules:

    α_j = θ_i,   c_j = x_i b_1j + y_i b_2j,
    b_1j = cos α_j,   b_2j = sin α_j,                        (14)

and n_j is set to 1.
The learning process just stated repeats itself automatically at a very fast rate until each pixel in the image has been presented to the net more than once. After that, the net can group pixels into line support regions. Figure 5 shows, for instance, a typical line support region containing those pixels which trigger the activity of the same node v_j at F4. All of these pixels have a similar gradient orientation and lie close to a hypothetical straight line which has been learned by the net during the learning process and can be represented by the equation

    x cos α_j + y sin α_j − c_j = 0,                         (15)

where α_j and c_j are stored in the so-called long-term memory (adaptive weights) of the net. Now we come back to the meaning of (x*, y*) (cf. (13)). It can be proved that (x*, y*) are just the coordinates of the projection of the pixel (x_i, y_i) onto the hypothetical straight line represented by the node v_j.
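As a concrete illustration, the grouping step of equations (12)–(14) can be sketched in Python. The form of the matching score P_j is not given in this excerpt, so the `match` function below is a hypothetical stand-in (orientation agreement damped by the pixel's distance from the line); the class name and the exact update order within (12) are likewise our own reading:

```python
import math

def project(x, y, alpha, c):
    """Foot of the perpendicular from (x, y) onto the line
    x cos(alpha) + y sin(alpha) - c = 0, i.e. the point (x*, y*) of eq. (13)."""
    b1, b2 = math.cos(alpha), math.sin(alpha)
    xs = b2 * b2 * x - b1 * b2 * y + c * b1
    ys = -b1 * b2 * x + b1 * b1 * y + c * b2
    return xs, ys

class LineGrouper:
    """Competitive grouping sketch: each F4 node j stores a line hypothesis
    (alpha_j, c_j) and its adapting number n_j."""

    def __init__(self, vigilance):
        self.rho = vigilance          # vigilance threshold
        self.nodes = []               # list of [alpha_j, c_j, n_j]

    def match(self, x, y, theta, node):
        # Hypothetical matching score: large when the pixel's gradient
        # orientation theta agrees with alpha_j and the pixel lies near the line.
        alpha, c, _ = node
        d_angle = math.cos(theta - alpha)
        d_line = abs(x * math.cos(alpha) + y * math.sin(alpha) - c)
        return d_angle / (1.0 + d_line)

    def present(self, x, y, theta):
        # Competition: pick the committed node with the largest score.
        if self.nodes:
            j, best = max(enumerate(self.match(x, y, theta, n) for n in self.nodes),
                          key=lambda t: t[1])
        else:
            j, best = -1, -float("inf")
        if best > self.rho:
            # Winner update, eq. (12), as running averages with gain 1/n_j.
            alpha, c, n = self.nodes[j]
            xs, ys = project(x, y, alpha, c)   # (x*, y*) from the current line
            n += 1                             # dn_j/dt = 1
            alpha += (theta - alpha) / n       # d(alpha_j)/dt
            b1, b2 = math.cos(alpha), math.sin(alpha)
            c += (xs * b1 + ys * b2 - c) / n   # d(c_j)/dt
            self.nodes[j] = [alpha, c, n]
        else:
            # Commit an uncommitted node, eq. (14), and set n_j = 1.
            c = x * math.cos(theta) + y * math.sin(theta)
            self.nodes.append([theta, c, 1])
            j = len(self.nodes) - 1
        return j
```

Presenting pixels that share an orientation and lie near one straight line repeatedly updates the same node, which is exactly how a line support region forms.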
5.2 Model Driven Line Description 
After the grouping process, some line support regions in the image are extracted, and each of them may hide a potential line structure. How to make this line structure explicit is thus the main issue of this section.
A line, as mentioned above, is just a visual impression produced by a line support region. To characterize this impression quantitatively, models for what we want to extract are required. These models can then be used to fit line support regions; a good fit suggests a good line description. So, the tasks of line description include model generation, parameter estimation, and quality description.

    name        line i
    is-a        line
    type        I
    length      83.0 pixel
    end points  (79.2, 162.7), (109.8, 239.8)
    θ           2.8 radian
    ρ           −13.7 pixel
    ε           2.0
    α           77.1 intensity level
    β           19.2 intensity level
    σ_0         8.2 intensity level
    σ_θ         0.002 radian
    σ_ρ         0.4 pixel
    σ_ε         0.1
    σ_α         2.5 intensity level
    σ_β         1.7 intensity level

Table 1: A line frame
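The attribute–value pairs of a line frame such as Table 1 map naturally onto a record type. A minimal sketch (the class and field names are our own; Table 1's values fill the example instance):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LineFrame:
    """A line frame in the spirit of Table 1: geometric and radiometric
    parameters of an extracted line, plus the standard deviations (sigma_*)
    describing the quality of their estimates."""
    name: str
    line_type: str                  # model type: "I", "II", "III" or "IV"
    length: float                   # pixels
    end_points: Tuple[Tuple[float, float], Tuple[float, float]]
    theta: float                    # orientation, radians
    rho: float                      # distance parameter, pixels
    eps: float                      # width parameter
    alpha: float                    # contrast, intensity levels
    beta: float                     # background, intensity levels
    sigma_0: float                  # intensity levels
    sigma_theta: float              # radians
    sigma_rho: float                # pixels
    sigma_eps: float
    sigma_alpha: float              # intensity levels
    sigma_beta: float               # intensity levels

# Example instance filled with the values of Table 1.
line_i = LineFrame("line i", "I", 83.0,
                   ((79.2, 162.7), (109.8, 239.8)),
                   2.8, -13.7, 2.0, 77.1, 19.2,
                   8.2, 0.002, 0.4, 0.1, 2.5, 1.7)
```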
There are many ways to generate a line model, which can be implicit or explicit, analytical or functional. For the sake of convenience, we define the following models to describe four line types:
Model

    I:   I(x, y) = α [1 + exp(−(x cos θ + y sin θ − ρ)/ε)]⁻¹ + β
    II:  I(x, y) = α exp[−½((x cos θ + y sin θ − ρ)/ε)²] + β
    III: I(x, y) = −α exp[−½((x cos θ + y sin θ − ρ)/ε)²] + β
    IV:  I(x, y) = α (x cos θ + y sin θ − ρ) + β,   β = Ī
where I(x, y) denotes the intensity of a pixel (x, y), Ī denotes the average intensity of all pixels within a line support region, weighted by their gradient magnitude, and Θ = (θ, ρ, ε, α, β) is a set of parameters describing geometric and radiometric aspects of the line's behavior.
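Since all four models depend on the pixel only through the signed distance d = x cos θ + y sin θ − ρ of (x, y) from the line axis, they can be evaluated together. A sketch (the function name and parameter packing are our own choices):

```python
import math

def line_models(x, y, theta, rho, eps, alpha, beta):
    """Evaluate the four line models I(x, y) at pixel (x, y).
    d is the signed distance of the pixel from the line axis."""
    d = x * math.cos(theta) + y * math.sin(theta) - rho
    g = math.exp(-0.5 * (d / eps) ** 2)
    return {
        "I":   alpha / (1.0 + math.exp(-d / eps)) + beta,  # step edge (sigmoid)
        "II":  alpha * g + beta,                           # bright line on dark ground
        "III": -alpha * g + beta,                          # dark line on bright ground
        "IV":  alpha * d + beta,                           # linear ramp; here beta = I_bar
    }
```

On the line axis itself (d = 0), model I gives the mid-edge intensity α/2 + β, model II the peak α + β, model III the trough β − α, and model IV just β.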
Given a line support region R = {I_i(x_i, y_i), i = 1, …, n} and a line model I(x, y) = f(x, y; θ, ρ, ε, α, β), it is not difficult to estimate the unknown parameters Θ based on, e.g., the least squares estimation technique. This technique, however, is very sensitive to the presence of outliers, i.e., to intensities with very large deviations from the underlying surface. To reduce the effect of outliers on the estimates, we need methods known as robust estimators (Huber, 1981). Here the parameters Θ are estimated by minimizing a penalty function of the residuals, i.e., Σ_i ρ(r_i), where r_i denotes the residual. This is a minimization problem which can be solved as iteratively reweighted least squares, with the weights defined in terms of ρ(r_i):

    w(r_i) = (1/r_i) dρ(r_i)/dr_i.                           (16)
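Iteratively reweighted least squares can be sketched for a toy case. The paper fits the richer models I–IV; to keep the example self-contained we fit a straight line y = a·x + b with the Huber weight w(r) = min(1, k/|r|), which follows from (16) for Huber's penalty function:

```python
def irls_line(xs, ys, k=1.345, iters=50):
    """Fit y = a*x + b robustly by iteratively reweighted least squares.
    Huber weights: w(r) = rho'(r)/r = 1 if |r| <= k, else k/|r|."""
    w = [1.0] * len(xs)
    for _ in range(iters):
        # Solve the weighted normal equations for (a, b).
        sw  = sum(w)
        sx  = sum(wi * xi for wi, xi in zip(w, xs))
        sy  = sum(wi * yi for wi, yi in zip(w, ys))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
        det = sw * sxx - sx * sx
        a = (sw * sxy - sx * sy) / det
        b = (sxx * sy - sx * sxy) / det
        # Update the weights from the current residuals.
        w = [1.0 if abs(yi - (a * xi + b)) <= k
             else k / abs(yi - (a * xi + b))
             for xi, yi in zip(xs, ys)]
    return a, b
```

A single gross outlier that would badly bias ordinary least squares is progressively down-weighted, so the estimates stay close to the inlier line.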
	        