Proceedings, XXth Congress (Part 3)
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B3. Istanbul 2004 
  
[Figure 2: Overview of the processing chain from a coarse to a refined model. Flow diagram with the stages: input data (image sequence and 3D "CAD model" of planar surfaces with corresponding points); self calibration, yielding calibrated cameras; rectification of pairs, yielding image pairs in epipolar geometry; guided correlation, yielding dense disparity maps and depth maps; creation of the elevation grid; texturing of the relief, yielding the textured elevation grid.]
very generic structural models (e.g. several smooth planes at different depth levels) are required. On the other hand, different objects or parts of them are not separated by the model and cannot be treated individually by the visualization process.
The other approach is to use models of objects which are expected to be present in the scene, e.g. buildings in an urban environment. This typically results in scene objects represented by polyhedral models made from planar surfaces. For visualization, areas of the original images are extracted and mapped as flat textures onto each of the surfaces. The advantage is that the objects are already hierarchically structured entities that are easy to handle. But relief structures within each surface are lost if they are (and this is the general case) not contained in the model.
Both methods produce models which appear less realistic on close inspection and in very oblique views, because no relief structure is visible. In our approach, a hybrid method is proposed that combines the advantages of both reconstruction techniques in a hierarchical manner (Fig. 1). Polygonal boundaries of planar surfaces are taken as input and constrain the reconstruction of the finer details, which are retrieved via dense matching. The result is a dense elevation grid that can be used to replace the flat texture. Because the elevation grid is restricted geometrically to the polyhedral model at its borders, it fits to the adjacent surfaces without any error.
2 PROCESSING CHAIN 
In this section the processing chain from input data to the 
final product is described. Refer to Fig. 2 for an overview. 
2.1 Input Data 
The inputs of the processing chain are an initial coarse geometric model of the object and a set of associated images depicting the building. The coarse model is a wire-frame model of the building, in which planar surfaces are described by 3D outline polygons. The images are needed for both relief reconstruction and texturing. They are linked to the polygons through known image coordinates of the vertices. The process requires the determination of the camera parameters from the 3D coordinates. Therefore, it must be ensured that the 3D points are not coplanar in space, so that the camera parameters can be computed for the 3D reconstruction. This condition is violated for, e.g., a single facade, but can be circumvented by including some additional points outside the plane. There are two possible methods to obtain suitable input models:
2.1.1 Retrieve model from images  For a site where only images exist, i.e. where no model has been generated or made available, it is possible to retrieve the wire-frame model from the images. Corresponding points in different images generally provide enough information to calibrate the cameras and to reconstruct the 3D coordinates of the imaged points. Next, either the 3D bounding lines of the surfaces are extracted and intersected, or the corners of these surfaces are connected in order to create the polygons.
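The reconstruction of 3D coordinates from corresponding points mentioned above can be sketched as standard linear triangulation. The paper does not specify its exact method, so the following is an illustrative example only, assuming numpy and two views with known projection matrices:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one 3D point from two views.

    P1, P2 : 3x4 projection matrices; x1, x2 : corresponding 2D points.
    Each view contributes two linear constraints of the form
    u * p3 - p1 = 0 and v * p3 - p2 = 0 (p1..p3 = rows of P);
    the stacked homogeneous system is solved by SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the solution is the right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]              # dehomogenize
```

With noiseless correspondences this recovers the world point exactly; with noise it gives the algebraic least-squares solution, which a subsequent bundle adjustment would refine.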
2.1.2 Coregister existing model to images  Here we assume that a wire-frame model already exists. The 3D points are already given and only have to be marked in the images so that the corresponding image coordinates become available. No knowledge about the camera parameters is needed as input, but for the later self calibration it should be known which of the parameters (e.g. focal length) can be considered constant.
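For either method, the requirement stated above that the 3D points must not be coplanar can be verified numerically before the camera parameters are computed. A minimal sketch (assuming numpy; the threshold and the check itself are illustrative, not taken from the paper):

```python
import numpy as np

def points_are_coplanar(X, tol=1e-8):
    """Check whether a set of 3D points lies (nearly) in a single plane.

    X : (n, 3) array of vertex coordinates. The points are coplanar
    exactly when the centered coordinate matrix has rank < 3, i.e.
    when its smallest singular value is (close to) zero.
    """
    Xc = X - X.mean(axis=0)            # center the point cloud
    s = np.linalg.svd(Xc, compute_uv=False)
    return s[-1] < tol * max(s[0], 1.0)
```

The vertices of a single facade would fail this check; adding a few marked points off the facade plane, as suggested above, makes the configuration usable.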
2.2 Self calibration
The estimation of a depth map requires knowledge about the pose of the cameras as well as their calibration parameters. If these are not known, they have to be computed from the given point assignments. Such a task (the simultaneous computation of inner and outer camera parameters when no initial values are known) is commonly referred to as auto or self calibration (Hartley and Zisserman, 2000). In the case that corresponding 2D and 3D points are available, the following two-step strategy can be applied:
(a) Linear Resection  Resection computes the homogeneous 3x4 projection matrix P from corresponding image points x_i and world points X_i, which are related by

x_i = P X_i   (1)

Using the vector cross product

x_i × P X_i = 0   (2)

Eq. 1 can be transformed into an equivalent equation

A p = 0   (3)
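The resection step of Eqs. 1-3 is the standard DLT construction: each correspondence contributes two independent rows of A via the cross product of Eq. 2, and p (the 12 stacked entries of P) is the null vector of A. A minimal sketch assuming numpy (illustrative, without the data normalization a production implementation would add):

```python
import numpy as np

def linear_resection(x, X):
    """Estimate the 3x4 projection matrix P from n >= 6 correspondences.

    x : (n, 2) image points, X : (n, 3) world points.
    Builds the matrix A of Eq. 3 from the cross-product constraint
    x_i x (P X_i) = 0 (Eq. 2) and solves A p = 0 by SVD.
    """
    n = x.shape[0]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        Xh = np.append(X[i], 1.0)      # homogeneous world point
        u, v = x[i]
        # two independent rows of the cross-product constraint
        A[2 * i, 4:8] = -Xh
        A[2 * i, 8:12] = v * Xh
        A[2 * i + 1, 0:4] = Xh
        A[2 * i + 1, 8:12] = -u * Xh
    # p = right singular vector for the smallest singular value of A
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)        # p stacks the rows of P
```

The recovered P is defined only up to scale, which is why the non-coplanarity of the X_i discussed in Section 2.1 matters: for coplanar points the null space of A is degenerate and P cannot be determined.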