VIRTUAL REALITY (VR) APPLIED TO ENVIRONMENT REPRESENTATIONS:
SOME EXAMPLES AT UNIVERSITY OF PADUA (ITALY)
V. Achilli, F. Barison
Dip. Costruzioni e Trasporti - Università di Padova - Italy
Via Marzolo 9-35100 Padova
Phone +39-049-8275584 fax +39-049-8275582
e-mail: lrg@uxl.unipd.it
A. Vettore
Dip. Territorio TESAF - Università di Padova - Italy
AGRIPOLIS - Statale Romea 16 - 35020 Legnaro (Padova)
Phone +39-049-8275580 fax +39-049-8272713
e-mail: vettoan@uxl.unipd.it
Commission VI, Working group 3
ABSTRACT
In recent years, architectural surveying has increasingly required more and more complete representations of the scene or of the surveyed object. Particular efforts are being made to integrate the photogrammetric survey with representations suited to virtual visits.
One of the most appreciated commercial systems allowing environments, their interiors and the objects they contain, to be visited interactively, while providing high-quality images regardless of the complexity of the scene, is "QuickTime VR". In this paper we present in detail the application of that interactive system to the virtual visit of the "Anatomical Theatre" and of the "Ancient Courtyard" of the University of Padova, also providing the theoretical basis of the algorithms employed for the reconstruction of the visited environments from a set of digital images.
1. THEORETICAL BACKGROUND
In the absence of information about the position of the camera in the scene, we can assume that the reference system (O, X, Y, Z) coincides with that of the camera, namely that the optical centre of the camera is the origin of this reference system (see Fig. 1). For simplicity we consider a pinhole camera model, therefore neglecting the geometrical distortions introduced by the camera lenses. After converting to homogeneous coordinates, the transformation matrix V, by which the points lying in 3D space are projected onto the image plane, results as follows:

    V = | 1   0    0    0 |
        | 0   1    0    0 |        (1)
        | 0   0   1/f   0 |

More precisely, the point u of the image plane is the intersection between the optical ray OP and the plane of equation z = f, where f is the effective focal length of the camera.
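As a purely illustrative sketch (not part of the original survey pipeline, and with an assumed example focal length), the following Python/NumPy fragment builds the matrix of equation (1) and verifies that a 3D point is projected to (f·x/z, f·y/z), i.e. onto the plane z = f:

import numpy as np

f = 0.035                                   # assumed effective focal length (metres), example value only

# 3x4 projection matrix of equation (1)
V = np.array([[1.0, 0.0, 0.0,     0.0],
              [0.0, 1.0, 0.0,     0.0],
              [0.0, 0.0, 1.0 / f, 0.0]])

p = np.array([0.4, 0.2, 2.0, 1.0])          # homogeneous 3D point p = (x, y, z, 1)
u = V @ p                                   # homogeneous image point
x_img, y_img = u[0] / u[2], u[1] / u[2]     # divide by the last component
print(x_img, y_img)                         # -> f*x/z, f*y/z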
Instead of considering the object stationary and the image plane moving according to the position assumed by the camera, it is useful to consider the camera fixed and to apply a series of rigid motions to the object itself, in such a way as to make every point of the object visible on the image plane. The resulting image is the same as the one we would obtain by turning the camera around the object.
Modifying the previous equation, a point p = (x, y, z, 1) of 3D space in homogeneous coordinates can be represented on the image plane by the point u = (x'w', y'w', w'), still in homogeneous coordinates, as follows:

    u ≅ V · E · p = P · p        (2)

where E is the matrix of the rigid motion applied to the object and P is the matrix related to the overall transformation introduced by the camera.
In this way, for the first image taken, we can assume that E = I and P = V.
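A minimal sketch of equation (2), under the assumption that the rigid motion E is a pure rotation about the vertical axis (the angle and the focal length are example values): rotating the object by theta and projecting with V produces the same image that would be obtained by turning the camera by -theta around the object.

import numpy as np

f = 0.035
V = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1 / f, 0]], dtype=float)

theta = np.radians(20.0)                    # example rotation angle
c, s = np.cos(theta), np.sin(theta)
E = np.array([[ c, 0, s, 0],                # rigid motion of the object:
              [ 0, 1, 0, 0],                # rotation about the Y axis, no translation
              [-s, 0, c, 0],
              [ 0, 0, 0, 1]])

P = V @ E                                   # combined camera matrix of equation (2)
p = np.array([0.4, 0.2, 2.0, 1.0])          # homogeneous object point
u = P @ p
print(u[0] / u[2], u[1] / u[2])             # image coordinates of the rotated point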
Setting u = P · p, where u = (x', y', 1) represents the point p = (x, y, z, 1) projected onto the image plane, we can recover p from u in the following way:

    p = M' · u + λ · m        (3)

where λ is an arbitrary scalar, m¹ spans the null space of P, that is P · m = 0, and

    M' = P^T (P · P^T)^(-1)        (4)
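Equations (3) and (4) can be illustrated with the same conventions; this is only a sketch with example values, not the authors' implementation. M' is the right pseudo-inverse of P, m spans its null space, and every value of the scalar lambda yields a point of the optical ray that re-projects onto the same image point u:

import numpy as np

f = 0.035
V = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1 / f, 0]], dtype=float)
theta = np.radians(20.0)
c, s = np.cos(theta), np.sin(theta)
E = np.array([[ c, 0, s, 0],
              [ 0, 1, 0, 0],
              [-s, 0, c, 0],
              [ 0, 0, 0, 1]])
P = V @ E                                   # camera matrix of equation (2)

M_prime = P.T @ np.linalg.inv(P @ P.T)      # equation (4): M' = P^T (P P^T)^-1
m = np.linalg.svd(P)[2][-1]                 # null space of P: P @ m = 0

u = np.array([0.007, 0.0035, 1.0])          # an image point in homogeneous coordinates
for lam in (0.0, 1.0, -2.5):
    p = M_prime @ u + lam * m               # equation (3)
    back = P @ p
    print(back / back[2])                   # proportional to u for every lambda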
In our case the matrix M' relative to the first image can be
calculated as follows:
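Assuming P = V from equation (1) (with an example focal length chosen only for illustration), the pseudo-inverse of the first image takes a particularly simple form, which the snippet below evaluates numerically:

import numpy as np

f = 0.035                                   # example focal length
V = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1 / f, 0]], dtype=float)

M_prime = V.T @ np.linalg.inv(V @ V.T)      # M' for the first image, where P = V
print(M_prime)
# rows: (1, 0, 0), (0, 1, 0), (0, 0, f), (0, 0, 0) -- the third row picks up the focal length f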
¹ Since the matrix P has rank 3, it must have a null space associated with it. Furthermore, it should be noted that all the points of 3D space lying on the same optical ray passing through the origin of the reference system are projected onto the same point of the image plane; this explains the existence of a one-dimensional space associated with the point p.