CIPA 2003 XIX th International Symposium, 30 September - 04 October, 2003, Antalya, Turkey
Server-side operations
1) The server receives from the client the command requesting a new view of the scene.
2) The server renders the requested view B and determines the projective transformation which maps the reference view A, displayed at time n-1, onto it.
3) This transformation is applied to view A, generating a so-called predicted view.
4) Then, a pixel-by-pixel difference between view B and
the predicted view is calculated by the server. A so-called
error-image is obtained, which is compressed
with the LZ77 algorithm and sent to the client.
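Step 4 can be sketched in a few lines of Python. The function names and the one-byte-per-pixel grayscale layout are assumptions made here for illustration, and zlib's DEFLATE (which builds on LZ77) stands in for the paper's LZ77 coder:

```python
import zlib

def error_image(predicted: bytes, actual: bytes) -> bytes:
    # Pixel-by-pixel difference between the actual view B and the
    # predicted view; modulo-256 arithmetic keeps each value in one byte.
    return bytes((a - p) & 0xFF for p, a in zip(predicted, actual))

def compress_error(err: bytes) -> bytes:
    # zlib's DEFLATE is LZ77-based; it stands in here for the LZ77
    # coder mentioned in the text.
    return zlib.compress(err)
```

When the predicted view is close to view B, the error-image is mostly zeros and compresses very well, which is exactly where the bandwidth saving of the split-browser scheme comes from.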
Client-side operations
1) The client receives the projective transformation
parameters and the error-image, which is decompressed
to retrieve the original error-image.
2) The projective transformation is applied to view A,
generating a so-called predicted view.
3) Then, the error-image is added pixel by pixel to the
predicted view, in order to recreate the requested view B,
which is then displayed by the client in the user's GUI.
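The client-side steps can be sketched as follows. The function name and the one-byte-per-pixel layout are illustrative assumptions; zlib's LZ77-based DEFLATE again stands in for the LZ77 decoder, and the projective warping of view A into the predicted view (step 2) is assumed to have been applied already:

```python
import zlib

def reconstruct_view(predicted: bytes, compressed_err: bytes) -> bytes:
    # Decompress the error-image, then add it pixel by pixel to the
    # predicted view to recover the requested view B (modulo 256,
    # matching the modular difference computed on the server).
    err = zlib.decompress(compressed_err)
    return bytes((p + e) & 0xFF for p, e in zip(predicted, err))
```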
Figure 3: The VRML Split-browser structure
The description presented above shows that the computational
load on the client is greatly reduced with respect to the classical
approach (transmission of the whole VRML model). Indeed,
the client is only required to apply the projective
transformation to the current view and to decompress the
error-image as fast as possible. Nowadays, the computational
capabilities required to perform such tasks are
commonly available on most home desktop PCs.
In the following subsection, the mathematics behind the
developed frame-compression algorithm is presented.
4.1 The projective transformation
A projective transformation can be considered as a subset of
the more general group of coordinate transformations, which
maps a given input 2D image point x = [x1, x2]' into a new
image point y = [y1, y2]'. Adopting a matrix notation, this
mapping function can be defined as follows:

    y = (A x + b) / (c' x + d)    (1)

where A is a 2x2 matrix, b = [b1, b2]', c' = [c1, c2] and d is a scalar.
Typically, each projective transformation is associated with a
matrix P in R^(3x3), called the projective matrix, allowing a more
compact notation for eq. (1):

        | A   b |   | a11  a12  b1 |
    P = |       | = | a21  a22  b2 |    (2)
        | c'  d |   | c1   c2   d  |
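The compact form of eq. (2) can be exercised directly. Below is a minimal sketch (the function name is an illustrative choice, not from the paper) applying a 3x3 projective matrix to a 2D point according to eq. (1):

```python
def apply_projective(P, x):
    # P is the 3x3 projective matrix of eq. (2):
    # rows [a11 a12 b1], [a21 a22 b2], [c1 c2 d].
    x1, x2 = x
    den = P[2][0] * x1 + P[2][1] * x2 + P[2][2]   # c'x + d
    y1 = (P[0][0] * x1 + P[0][1] * x2 + P[0][2]) / den
    y2 = (P[1][0] * x1 + P[1][1] * x2 + P[1][2]) / den
    return (y1, y2)
```

Note that with c = 0 and d = 1 the denominator is constant and the mapping reduces to the affine case y = Ax + b.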
In the following, the procedure adopted to compute the
projective transformation parameters will be described using
a simple polygon as target object in the 3D space. This
assumption doesn’t limit the effectiveness of our procedure,
since it is well known that each 3D object can be described in
terms of a number of interconnected polygons (typically
triangles). Therefore, in principle, it would be sufficient to
apply the algorithm to each composing polygon.
According to figure 4, the aim of our procedure is the
computation of the position of point T2, by applying to point
T1 the projective transformation identified by P3.
Figure 4: Geometric model for the projective transformation
If the projective transformations P1 and P2, mapping T on T1 and
T2 respectively, are known, the following relationship can be
established:

    T2 = P2(T) = P2(P1^-1(T1))    (3)
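The chain in eq. (3) rests on inverting a 3x3 projective matrix. The helpers below (names and matrices are illustrative, not from the paper) sketch the two operations needed, the 3x3 product and the 3x3 inverse via the adjugate:

```python
def matmul3(A, B):
    # 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv3(M):
    # Inverse of a 3x3 matrix via the closed-form adjugate / determinant.
    (a, b, c), (d, e, f), (g, h, i) = M
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]
```

With these, the composed map of eq. (3), and equally the product form P2 P1^-1 of eq. (4), is just `matmul3(P2, inv3(P1))`.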
In practice, the projective transformation which maps the
view-plane α1 (the part of polygon p displayed on the user's GUI
at time n-1) onto α2 (the view of polygon p at time n) can be
defined in terms of a matrix product:

    P3 = P2 P1^-1    (4)

where Pi denotes the projective matrix associated to the i-th
projective transformation. As shown by eq. (4), in order to
determine P3, we first need to compute P1 and P2. To this
aim, we consider firstly the parametric equations of the α and β
planes (see figure 5):
    α -> x'' = A t'' + d ,    β -> x' = B t' + c    (5)

where A, B are 3x2 matrices; c, d are vectors in R^3; t', t'' are in R^2 and x', x'' are in R^3. Then
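As a quick dimensional check of eq. (5), a point on such a plane can be evaluated for given parameters; the function name and the numeric values below are purely illustrative:

```python
def plane_point(B, c, t):
    # x' = B t' + c: B is a 3x2 matrix whose columns generate the plane,
    # c is a 3-vector offset, t holds the two plane parameters.
    return [B[i][0] * t[0] + B[i][1] * t[1] + c[i] for i in range(3)]
```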
we add two constraints: a) the vectors generating the planes are
orthogonal (eq. 6); b) vectors c and d, defining the distance of