Full text: Proceedings, XXth Congress (Part 3). International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXV, Part B3, Istanbul 2004.

In the following, a framework is presented that allows matching of points and lines across multiple oriented images in a unified manner. It proposes the use of spatial filters that do not operate in the image domain but instead use the known orientation to operate in the scene domain at the earliest possible stage. In doing so, not only is unification achieved, but the effects of using radiometric image information are also isolated, allowing more control over the use of intensity information in the matching process. It is even possible to use no intensity information at all, allowing fully automatic reconstruction based on geometric information alone, as is required in the scene depicted in figure 1.
In order to exploit the full geometric knowledge provided by the feature extraction, the statistical properties of the extracted features are used throughout the whole matching process, enabling the construction of, and operation on, graphs that represent the statistical relations between the objects.
2 FEATURE EXTRACTION 
A prerequisite for feature matching is the extraction itself. The task of feature extraction from single images is well understood and many approaches are available (cf. (Förstner, 1994), (Smith and Brady, 1997), (C.G. Harris, 1988), (Canny, 1986)). Even the statistical properties of the extracted features, i.e. points and lines, are obtainable as presented in (Förstner, 1994). If the exterior and interior orientation of the camera used is known, the uncertain projecting ray for every point and the uncertain projecting plane for every line segment can be computed according to (Heuel and Förstner, 2001). If, in addition, a lower and an upper bound on the distance of the depicted object from the camera is known, which is easily accessible in many applications including aerial imagery, the locus of an image point x in space is a space line segment s together with its uncertainty Σ_ss, and the locus of an image line segment l in space is a space quad q together with its uncertainty Σ_qq. Thus feature extraction in oriented images yields not only a set of image features together with a reference to the generating image

I_FE = { x_i | i = 1..N } ∪ { l_j | j = 1..M }

but also a set of space objects together with their uncertainties

S_FE = { (s_i, Σ_ss,i) | i = 1..N } ∪ { (q_j, Σ_qq,j) | j = 1..M }

Note that there is a one-to-one mapping

s_FE : S_FE → I_FE

between the two sets, associating each space object with its generating image object.
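The locus computation for the point case can be sketched as follows. This is a minimal illustration assuming a simple pinhole model with known camera centre and focal length; all names are illustrative and the propagation of the uncertainty Σ_ss is omitted:

```python
import math

def viewing_direction(x, y, f):
    """Unit direction (in the camera frame) of the projecting ray of
    image point (x, y) for a pinhole camera with focal length f."""
    n = math.sqrt(x * x + y * y + f * f)
    return (x / n, y / n, f / n)

def space_segment(center, direction, d_min, d_max):
    """Bounded locus of the image point: the part of the projecting ray
    between the lower and upper distance bound."""
    p0 = tuple(c + d_min * u for c, u in zip(center, direction))
    p1 = tuple(c + d_max * u for c, u in zip(center, direction))
    return (p0, p1)
```

With the camera at the origin looking along the principal axis, an image point at the principal point and depth bounds of 2 and 5 units yields the segment from (0, 0, 2) to (0, 0, 5).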
3 SPATIAL FILTERING 
In this framework all processing is done by filtering objects in the spatial domain. This means that, starting from
a set of space objects obtained from the set of images as described in the feature extraction section above, different filters are applied, yielding increasingly complex space objects. More precisely, a spatial filter is an algorithm

f : 2^S → 2^S

that takes a number of space objects as input and generates some different space objects as output. Again a mapping

σ_f : f(S) → 2^S

can be provided that associates every output space object with the source space objects that were used in the filter to generate it. Therefore every application of a spatial filter generates one more level in a source tree of the space objects. Two filters are proposed to yield the matches of points and lines over multiple views.
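This filter pattern can be sketched without committing to any particular geometry. In the toy sketch below (all names hypothetical), the filter consumes a list of space objects and, for every output, records the indices of the inputs that generated it, i.e. one level of the source tree; overlapping 1-D intervals stand in for intersecting space objects:

```python
def apply_spatial_filter(space_objects, filter_fn):
    """Run one spatial filter: filter_fn yields (new_object, source_indices)
    pairs; return the new objects plus the mapping back to their sources."""
    outputs, sources = [], []
    for new_obj, src in filter_fn(space_objects):
        outputs.append(new_obj)
        sources.append(src)  # one source-tree level: output -> its inputs
    return outputs, sources

def pairwise_overlap(objs):
    """Toy stand-in for a geometric filter: intersect every pair of
    1-D intervals that actually overlap."""
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            (a0, a1), (b0, b1) = objs[i], objs[j]
            lo, hi = max(a0, b0), min(a1, b1)
            if lo <= hi:                 # the incidence test would go here
                yield (lo, hi), (i, j)   # new object and its sources
```

Applying the filter to [(0, 2), (1, 3), (5, 6)] returns the single intersection (1, 2) with source indices (0, 1).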
3.1 Pairwise Grouping 
The first step is a pairwise matching of the objects. In order to do this, a graph

G_PG = (S_FE, E_PG)

induced by the statistical incidence relation (cf. (Förstner and Heuel, 2000)) is constructed. The vertices of that graph are the space objects, and an edge is inserted between two vertices p and q if and only if there is no reason to reject the statistical hypothesis that the space objects p and q intersect each other. The edge set is thus denoted by

E_PG = { (p, q) | p, q ∈ S_FE ∧ intersect(p, q) }
Every edge in this graph represents a possible match between two image objects that is not contradictory to the scene geometry. If the image intensity information is to be included in the algorithm, an intensity based distance measure

d : I_FE × I_FE → R

must be introduced, and those graph edges have to be pruned that do not comply with the distance measure, i.e. the edge set is adjusted using an intensity distance threshold T as follows

E_PG = { (p, q) ∈ E_PG | d(s_FE(p), s_FE(q)) < T }
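The pruning step amounts to a plain filter over the edge set. In this minimal sketch, `d` and the image descriptors are placeholders for whatever intensity distance measure is chosen:

```python
def prune_edges(edges, image_feature_of, d, T):
    """Keep only those edges whose associated image features are closer
    than the threshold T under the intensity distance d."""
    return [(p, q) for (p, q) in edges
            if d(image_feature_of[p], image_feature_of[q]) < T]
```

For example, with scalar descriptors and d taken as the absolute difference, the edge (0, 2) below is pruned because its descriptors differ by 20, well above the threshold.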
Most matching techniques, including the classical correlation based and least squares approaches, focus on the development of powerful and robust intensity distance measures (cf. (Schmid and Mohr, 1997), (Schmid and Zisserman, 1997)).
As pointed out in the introduction, there are certain conditions that do not allow any pruning at this stage. Since no possible matches should be lost at this early stage of processing, the full edge set is used here and no pruning is performed. The resulting filtered set is thus obtained by taking every edge of G_PG and constructing the intersecting object from its end-vertices' space objects. Thus the filter returns the set

S_PG = { (c(p, q), Σ_cc) | (p, q) ∈ E_PG }
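The whole pairwise-grouping filter can be sketched with a 1-D stand-in for the statistical incidence test: two uncertain scalar "objects" are taken to intersect when their difference is small relative to its standard deviation. The 1.96 quantile is an illustrative choice, not the paper's actual test statistic:

```python
def intersect(p, q, quantile=1.96):
    """H0: the two uncertain values coincide; accept unless the
    normalized difference exceeds the quantile."""
    (xp, vp), (xq, vq) = p, q          # (value, variance) pairs
    return abs(xp - xq) <= quantile * (vp + vq) ** 0.5

def pairwise_grouping(space_objects):
    """Edge set E_PG: all index pairs whose objects pass the test."""
    n = len(space_objects)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if intersect(space_objects[i], space_objects[j])]
```

With unit variances, values 0.0 and 1.0 pass the test (normalized difference 0.71) while 10.0 is rejected against both, so only the edge (0, 1) is inserted.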
where c(p, q) is the intersecting object constructed from the space objects p and q, as described in (Heuel and Förstner, 2001).

3.2 Multi-view Grouping

The second filter aggregates the pairwise matches over multiple images. Again a graph induced by the statistical incidence relation (cf. (Förstner and Heuel, 2000)) is constructed; its vertices are again the space objects, and an edge is inserted between two vertices if there is no reason to reject the hypothesis that the corresponding space objects intersect. It follows that objects sharing a common scene feature are connected across the multiple views of the image observations in this graph. A set of components C = [...] is computed [...]
  
	        