Figure 5: Drawing triangles of neighboring projected camera centers and approximating geometry by one plane for the whole scene, by one plane for one camera triple, or by several planes for one camera triple.
depth value for the projection of the virtual viewpoint in the depth map corresponding to each vertex. These points can be interpreted as the intersections of the lines connecting the virtual viewpoint and the real viewpoints with the scene geometry. Knowing the 3D coordinates of the triangle corners, we can define a plane through them and apply the same rendering technique as described above.
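As an illustration of this step, the following sketch (Python, not from the original implementation) computes one camera triple's approximating plane from the three intersection points. The helper depth_towards(), which would return the depth stored at the projection of the virtual viewpoint in a real camera's depth map, is a hypothetical placeholder, and depth is assumed to be measured along the viewing ray.

import numpy as np

def triple_plane(virtual_center, cameras):
    # cameras: three (center, depth_towards) pairs, one per corner of the camera triple
    corners = []
    for center, depth_towards in cameras:
        # Ray from the real viewpoint towards the virtual viewpoint.
        direction = virtual_center - center
        direction = direction / np.linalg.norm(direction)
        # The depth stored at the projection of the virtual viewpoint gives the
        # point where this line meets the scene geometry.
        corners.append(center + depth_towards(virtual_center) * direction)
    p0, p1, p2 = corners
    # Plane through the three intersection points (point + unit normal).
    normal = np.cross(p1 - p0, p2 - p0)
    normal = normal / np.linalg.norm(normal)
    return p0, normal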
Finally, if the triangles exceed a given size, they can be subdivided into four sub-triangles. For each of these sub-triangles, a separate approximating plane is calculated in the manner described above. Further subdivision can, of course, be carried out in the same way to improve accuracy. This subdivision is generally necessary if only a few triangles contribute to a single virtual view, and it should be performed at a resolution that matches the performance demands and the complexity of the geometry. Rendering can be performed in real time using the alpha-blending and texture-mapping facilities of today's graphics hardware. More details on this approach can be found in (Koch et al., 1999; Heigl et al., 1999; Koch et al., 2001). A similar approach was presented recently (Buehler et al., 2001).
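A minimal sketch of the recursive subdivision described above, again in Python and not taken from the paper's implementation: triangles of projected camera centers whose longest edge exceeds a chosen threshold are split at the edge midpoints into four sub-triangles, and each resulting leaf triangle would receive its own approximating plane (e.g. with triple_plane() from the previous sketch). The threshold max_size and the recursion limit are illustrative parameters.

import numpy as np

def subdivide(tri, max_size, depth_limit=3):
    # tri: 3x2 array with the projected camera centers (virtual image plane coordinates).
    a, b, c = tri
    longest = max(np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a))
    if longest <= max_size or depth_limit == 0:
        return [tri]  # leaf triangle: gets its own approximating plane
    # Split at the edge midpoints into four sub-triangles.
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    children = [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return [leaf for child in children
            for leaf in subdivide(np.array(child), max_size, depth_limit - 1)]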
We have tested our approaches with an image sequence of 187 images showing an office scene. Figure 6 (top-left) shows one particular image. A digital consumer video camera (Sony TRV-900) was swept freely over a cluttered scene on a desk, covering a viewing surface of about 1 m². Figure 6 (top-right) shows the calibration result. Results of rendered views are shown in the middle of the figure. The image on the left is rendered with a planar approximation, while the image on the right was generated with two levels of subdivision. Note that some ghosting artifacts are visible for the planar approximation, but not for the more detailed approximation. It is also interesting to note that most ghosting occurs in the vertical direction, because the inter-camera distance is much larger in this direction.
Figure 6: Unstructured lightfield rendering: image from the original sequence (top-left), recovered structure and motion (top-right), novel views generated for planar (bottom-left) and view-dependent (bottom-right) geometric approximation.

In the lower part of Figure 6, a detail of a view is shown for the different methods. In the case of one global plane (left image), the reconstruction is sharp where the approximating plane intersects the actual scene geometry. The reconstruction is blurred where the scene geometry diverges from this plane. In the case of local planes (middle image), the reconstruction is almost sharp at the corners of the triangles, because there the scene geometry is considered directly. Within a triangle, ghosting artifacts occur where the scene geometry diverges from the particular local plane. If these triangles are subdivided (right image), these artifacts are reduced further.
3 CONCLUSION 
In this paper an automatic approach was presented that takes a video sequence as input and computes a 3D model as output. By combining state-of-the-art approaches developed in the fields of computer vision, computer graphics and photogrammetry, our system is able to obtain good-quality results on video as well as on photographic material.
ACKNOWLEDGEMENTS 
The authors are grateful to Marc Waelkens and his team for making the archaeological material accessible to them. Part of this work was carried out in collaboration with Reinhard Koch and Benno Heigl. The financial support of the FWO project G.0223.01 and the IST projects INVIEW, ATTEST and 3DMurale is also gratefully acknowledged. Kurt Cornelis is a research assistant of the Fund for Scientific Research - Flanders (Belgium).
REFERENCES 
Beardsley, P., Zisserman, A., Murray, D., 1997. Sequential Updating of Projective and Affine Structure from Motion. International Journal of Computer Vision 23(3), pp. 235-259.