XVIIth ISPRS Congress (Part B3)

Figure 5: A supporting line context
  
Figure 6: A novel neural network for line grouping 
As illustrated in Figure 2, the performance of proximity grouping depends on the spatial location of pixels: those pixels are grouped which lie close together on the line. Connectivity is another important grouping law, and its performance also depends on the spatial location of pixels: those pixels are grouped which are connected to each other along a straight line. In contrast, the performance of similarity grouping depends on local radiometric properties of pixels: those pixels are grouped which are similar, for instance, in intensity, gradient magnitude, and gradient orientation.
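As a rough, standalone illustration (not taken from the paper), these criteria can be phrased as pairwise predicates over edge pixels; the record layout, the function names, and the thresholds here are assumptions:

```python
import numpy as np

# Hypothetical edge-pixel record: (x, y, intensity, grad_mag, grad_orient).
def proximal(p, q, max_dist=2.0):
    """Proximity: group pixels that lie close together."""
    return np.hypot(p[0] - q[0], p[1] - q[1]) <= max_dist

def similar(p, q, max_dtheta=np.deg2rad(10.0), max_dmag=20.0):
    """Similarity: group pixels whose local radiometric properties
    (here gradient magnitude and orientation) agree."""
    dtheta = abs(p[4] - q[4]) % np.pi
    dtheta = min(dtheta, np.pi - dtheta)  # orientation is modulo pi
    return dtheta <= max_dtheta and abs(p[3] - q[3]) <= max_dmag
```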
Now, the main goal is to integrate these grouping criteria into an effective implementation and to combine the results when different criteria give different results. For this purpose, a novel neural network has been developed. As shown in Figure 6, the network has four layers denoted by $F_i$, $i = 1, 2, 3, 4$. The layer $F_1$ has $M$ neurons and receives the input vector $I_1 = (\theta_i, g_i, \ldots)$ containing the gradient orientation $\theta_i$, the gradient magnitude $g_i$, and other local radiometric properties of the $i$-th pixel. $I_2 = (x_i, y_i)$ is the second input vector, containing the coordinates of the same pixel; it is presented to the layer $F_2$. The layer $F_4$ is the output layer containing $N$ neurons, and $F_3$ is a hidden layer which also contains $N$ neurons. These layers are connected by the weights $a_{ij}$, $i = 1, \ldots, M$, $j = 1, \ldots, N$, between $F_1$ and $F_4$; $b_{ij}$, $i = 1, 2$, $j = 1, \ldots, N$, between $F_2$ and $F_3$; and $c_j$, $j = 1, \ldots, N$, between $F_3$ and $F_4$.
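As a minimal sketch of this parameter layout (assuming NumPy and the simplified case used below, where $I_1$ carries only the orientation, so $M = 1$; all names are illustrative), the weights can be held in growing arrays:

```python
import numpy as np

# Illustrative weight storage for the four-layer network. Nodes at
# F3/F4 are created incrementally during learning, so N starts at 0.
M = 1
a = np.empty((M, 0))        # a_ij: F1 -> F4, stored orientations
b = np.empty((2, 0))        # b_ij: F2 -> F3, direction cosines
c = np.empty(0)             # c_j:  F3 -> F4, stored line offsets
n = np.empty(0, dtype=int)  # n_j:  adapting numbers of the F4 nodes
```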
To train this network to aggregate pixels into line support regions, we apply all pixels in an image whose gradient magnitudes are greater than a threshold as input data. For simplicity, let $I_1$ only contain the gradient orientation $\theta_i$ of the $i$-th pixel. For the first inputs $I_1 = (\theta_1)$ and $I_2 = (x_1, y_1)$, the first node $v_1$ at $F_4$ is chosen, and the weights which refer to the direct and indirect connections between $v_1$ and other nodes at $F_1$, $F_2$, and $F_3$ are adapted by the rules

$$a_{11} = \theta_1, \quad c_1 = x_1 b_{11} + y_1 b_{21}, \quad b_{11} = \cos a_{11}, \quad b_{21} = \sin a_{11}, \qquad (8)$$

and the adapting number $n_1$ of $v_1$ is set to 1.
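Continuing the illustrative sketch above, rule (8) becomes a small node-creation step (the function name is an assumption):

```python
def create_node(a, b, c, n, theta, x, y):
    """Rule (8): a new F4 node stores the orientation theta as a_1j,
    the direction cosines (b_1j, b_2j) = (cos theta, sin theta), and
    the projection c_j = x*b_1j + y*b_2j; its adapting number is 1."""
    b1, b2 = np.cos(theta), np.sin(theta)
    a = np.hstack([a, [[theta]]])
    b = np.hstack([b, [[b1], [b2]]])
    c = np.append(c, x * b1 + y * b2)
    n = np.append(n, 1)
    return a, b, c, n
```

Note that $c_1 = x_1\cos\theta_1 + y_1\sin\theta_1$ is the offset of the line in normal form, $x\cos\theta + y\sin\theta = c$, so each $F_4$ node effectively stores one line hypothesis.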
For the $t$-th inputs $I_1 = (\theta_t)$ and $I_2 = (x_t, y_t)$, the input to the $i$-th node of $F_3$ equals $s_i = x_t b_{1i} + y_t b_{2i}$, $i = 1, \ldots, N$, which is, for the sake of convenience, also its output signal. It is clear that $s_i$ is just a matching score for the similarity between inputs and stored weights. For the $i$-th node of $F_4$, the situation is more complex. It has the input $\theta_t - a_{1i}$ from $F_1$ and the input $s_i - c_i$ from $F_3$, $i = 1, \ldots, N$. Both inputs can be normalized by using

$$P_{ai} = \exp\left[-\frac{(\theta_t - a_{1i})^2}{c_a}\right], \quad P_{ci} = \exp\left[-\frac{(s_i - c_i)^2}{c_c}\right], \qquad (9)$$

where $c_a$ and $c_c$ are two normalizing constants, and $P_{ai}$ and $P_{ci}$ can be thought of as two matching scores for the two inputs from $F_1$ and $F_3$. $P_{ai}$ gives a measure of the performance of similarity grouping, while $P_{ci}$ gives a measure of the performance of proximity grouping.
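A sketch of this forward pass, continuing the arrays above (the concrete values of the normalizing constants are assumptions):

```python
def matching_scores(a, b, c, theta_t, x_t, y_t, c_a=0.1, c_c=4.0):
    """Rule (9): s_i = x_t*b_1i + y_t*b_2i is the F3 output; P_ai and
    P_ci normalize the two F4 inputs into matching scores in (0, 1]."""
    s = x_t * b[0] + y_t * b[1]                 # F3 outputs s_i
    P_a = np.exp(-(theta_t - a[0]) ** 2 / c_a)  # similarity scores P_ai
    P_c = np.exp(-(s - c) ** 2 / c_c)           # proximity scores P_ci
    return P_a, P_c
```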
Now, all nodes of $F_4$ compete to recognize features in the input layers. Here the main question is how to measure the match of the $i$-th node of $F_4$ using a matching score $P_i$. This is a problem of drawing inference based on $P_{ai}$ and $P_{ci}$. When calculated using probability theory, the matching score $P_i$ can be derived based on Bayes' rule:

$$P_i = P(P_{ai}, P_{ci}) = P(P_{ai} \mid P_{ci})\, P(P_{ci}) \qquad (10)$$
Based on fuzzy logic, the matching score $P_i$ can be calculated as follows:

$$P_i = P(P_{ai}, P_{ci}) = \min(P_{ai}, P_{ci}). \qquad (11)$$
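Continuing the sketch, the fuzzy combination (11) and the competition among the $F_4$ nodes reduce to a few lines; all concrete numbers below are made up for illustration:

```python
def match(P_a, P_c):
    """Rule (11): combine the two scores by the fuzzy minimum."""
    return np.minimum(P_a, P_c)

# Usage example: two stored line hypotheses, one new pixel.
a, b, c, n = create_node(a, b, c, n, theta=0.0, x=10.0, y=5.0)
a, b, c, n = create_node(a, b, c, n, theta=np.pi / 2, x=3.0, y=8.0)
P_a, P_c = matching_scores(a, b, c, theta_t=0.05, x_t=11.0, y_t=5.0)
winner = int(np.argmax(match(P_a, P_c)))  # best-matching F4 node
```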
Now, for the $t$-th pixel with the inputs $I_1 = (\theta_t)$ and $I_2 = (x_t, y_t)$, only those nodes of $F_4$ which have been triggered by the neighboring pixels of the $t$-th pixel during the last learning compete with each other. After