for all k: Optimal(k) = ∞
for all assignments C_i
    for all Ω_j in the list
        if Ω_j and C_i are incompatible
            continue
        K_1 = Σ_{C_k ∈ Ω_j} K_R(C_k, C_i) + K_A(C_i)
        K_2 = K_1 + K(Ω_j)
        if K_2 > p · Optimal(|Ω_j| + 1)
            continue
        if Optimal(|Ω_j| + 1) > K_2
            Optimal(|Ω_j| + 1) = K_2
        create new list element Ω_k = Ω_j + C_i
        K(Ω_k) = K_2, insert Ω_k into the list
Figure 2: Branch-and-bound algorithm for the determination of optimal assignment sets.
The algorithm is now described in detail; a pseudocode version is shown in figure 2. An array called 'Optimal' always contains the minimal costs for a given number of chosen assignments. A chained list holds all assignment sets regarded so far; it is ordered from low to high costs and is empty at the beginning. In the outer loop all assignments C_i are regarded. We try to add each of them to every assignment set Ω_j already in the list. If they are not compatible, the procedure continues with the next Ω_j. We do the same if the newly generated Ω_k = Ω_j + C_i produces costs much higher than the optimal costs for that number of assignments. Otherwise the new list element is inserted into the ordered list.
When all assignments C_i have been treated, we can search the list for the optimal assignment sets. As the list is ordered by cost, we go through it and pick the first occurrence of an assignment set Ω_i with |Ω_i| = k for each k = 1, 2, ....
The presented algorithm examines only the best and only compatible assignment sets. A new assignment set is created only if it is compatible and not too far from the best assignment set known at that time. The parameter p determines which solutions are considered. For p = 1 only solutions better than the currently best solution are accepted; for p = ∞ all solutions are accepted. Currently we use the parameter value p = 3, which has proved to be appropriate.
The algorithm furnishes a list of optimal assignment 
sets. For every number of matches we get the optimal 
assignment set. The last element is the combination 
with the maximal number of compatible matches possible.
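As an illustration, the following is a minimal Python sketch of this branch-and-bound search. The callables is_compatible, attribute_cost and relational_cost, standing for the compatibility test and the cost terms K_A and K_R, are assumed to be supplied by the caller and are not part of the paper; as an implementation choice, the list is seeded with the empty assignment set so that the first candidate has something to join.

```python
import math
from typing import Callable, List, Sequence, Tuple

def branch_and_bound(
    candidates: Sequence,                              # candidate assignments C_i
    is_compatible: Callable[[object, object], bool],   # pairwise compatibility test (assumed)
    attribute_cost: Callable[[object], float],         # K_A(C_i) (assumed)
    relational_cost: Callable[[object, object], float],# K_R(C_k, C_i) (assumed)
    p: float = 3.0,                                    # pruning factor from the paper
):
    """For every cardinality, keep the cheapest compatible assignment set found."""
    optimal = [math.inf] * (len(candidates) + 1)   # optimal[k]: minimal cost of a set of k assignments
    sets: List[Tuple[float, tuple]] = [(0.0, ())]  # (cost, assignment set); seeded with the empty set
    best = {}                                      # best[k]: cheapest set of cardinality k

    for c in candidates:                           # outer loop over all assignments C_i
        new_sets = []
        for cost_j, omega_j in sets:
            # reject sets containing an assignment incompatible with C_i
            if not all(is_compatible(c, ck) for ck in omega_j):
                continue
            k1 = sum(relational_cost(ck, c) for ck in omega_j) + attribute_cost(c)
            k2 = cost_j + k1
            size = len(omega_j) + 1
            if k2 > p * optimal[size]:             # prune: too far from the best known cost
                continue
            if k2 < optimal[size]:
                optimal[size] = k2
                best[size] = omega_j + (c,)
            new_sets.append((k2, omega_j + (c,)))
        sets.extend(new_sets)
        sets.sort(key=lambda e: e[0])              # keep the list ordered by cost
    return best, optimal
```

In this sketch, best then contains, for every number of matches, the cheapest compatible assignment set found; its entry of largest cardinality corresponds to the combination with the maximal number of compatible matches.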
6.2 Specification of attribute and relational costs
The attributes and relations contain information that should be used for the matching. It is assumed that all attributes contain the same amount of information.

Figure 3: 3 images: 3 matches are found, but one match (3', 3'') is only found in two images.

Let us assume we have an attribute in two images, a^(1) and a^(2). The attributes a are uniformly distributed over a certain domain. We assume that the attribute difference a^(1) − a^(2), however, is normally distributed with a certain variance σ² and zero mean if the two features correspond.
A possible cost function K_A(C_i) is the sum of squared attribute differences. When there is more than one attribute, the squared differences have to be weighted. Dividing the squared differences by the variances results in a sum of squared normally distributed random variables with unit variance, provided that the match is correct. We denote the ith attribute in the mth image by a_i^(m) and the mean attribute by ā_i. Thus for n attributes in M images we get:
$$\bar{a}_i = \frac{1}{M}\sum_{m=1}^{M} a_i^{(m)}, \qquad i = 1,\ldots,n \tag{5}$$

$$K_A(C) = \sum_{i=1}^{n}\sum_{m=1}^{M} \frac{\left(a_i^{(m)} - \bar{a}_i\right)^2}{\sigma_i^2} \tag{6}$$
Here σ_i² is the variance of the ith attribute difference. As K_A(C) is a sum of squared normally distributed random variables, it is χ²(nM)-distributed. Its mean value E(K_A(C)) is nM.
Often one match does not exist in all images (see figure 3). In (6) we then sum only over the images in which the regarded feature exists and add n to compensate for this. The consequence of this approach is that we neither punish nor reward missing correspondences in one image.
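A possible implementation of the attribute cost of equation (6), including one reading of the compensation rule for missing features, could look as follows. The function name and the None convention for a feature missing in an image are illustrative and not from the paper; the feature is assumed to be observed in at least one image.

```python
from typing import Optional, Sequence

def attribute_cost(
    attributes: Sequence[Optional[Sequence[float]]],  # attributes[m]: the n attribute values of the
                                                      # feature in image m, or None if missing there
    variances: Sequence[float],                       # sigma_i^2 of the ith attribute difference
) -> float:
    """K_A(C) following eq. (6), with compensation for missing correspondences."""
    n = len(variances)
    observed = [a for a in attributes if a is not None]
    cost = 0.0
    for i in range(n):
        mean = sum(a[i] for a in observed) / len(observed)               # a-bar_i, eq. (5)
        cost += sum((a[i] - mean) ** 2 for a in observed) / variances[i]
    # add n per image in which the feature is missing, so that missing
    # correspondences are neither punished nor rewarded (E(K_A) stays nM)
    cost += n * (len(attributes) - len(observed))
    return cost
```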
To determine the relational costs K_R(C_i, C_j) we proceed in a similar way. Again we assume that the differences of the relation values are normally distributed. The formula for K_R(C_i, C_j) is shown for only one relation; the case of several relations is straightforward. We denote by r_ij^(m) the relational value between feature i and feature j in the mth image. Again we calculate the mean value r̄_ij and from this K_R(C_i, C_j):
$$\bar{r}_{ij} = \frac{1}{M}\sum_{m=1}^{M} r_{ij}^{(m)} \tag{7}$$

$$K_R(C_i, C_j) = \sum_{m=1}^{M} \frac{\left(r_{ij}^{(m)} - \bar{r}_{ij}\right)^2}{\sigma_r^2} \tag{8}$$
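Analogously, a short sketch of the relational cost of equations (7) and (8) for a single relation might look like this; the variance σ_r² of the relation difference is assumed to be known. Both sketched cost functions can be plugged into the branch-and-bound sketch above.

```python
from typing import Sequence

def relational_cost(
    relations: Sequence[float],   # r_ij^(m): relation value between features i and j in image m
    variance_r: float,            # sigma_r^2 of the relation difference
) -> float:
    """K_R(C_i, C_j) following eqs. (7) and (8) for one relation."""
    mean = sum(relations) / len(relations)                     # r-bar_ij, eq. (7)
    return sum((r - mean) ** 2 for r in relations) / variance_r
```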