unknown parameters, c the vector of fixed parameters and let b be
the vector of observations with their corresponding vector of
errors v; then the relation between these parameters can be
written as:

f(\hat{b}, c) = f(\tilde{b} + \hat{v}, c) = 0     (1)
The unknown parameters are marked by a hat "^"; the observations
affected by normally distributed noise are marked by a tilde "~".
In this case the equations in (1) form an over-determined
non-linear system. The vector of relations f connecting the
parameters has to be differentiated with respect to the unknown
and the observed parameters in order to apply the Newton method
for finding the minimum of:
\hat{\sigma}_0^2 = \frac{\hat{v}^T K^{-1} \hat{v}}{r}     (2)

where the integer r is called the redundancy, σ₀² is called the
unit weight error and K is the variance-covariance matrix of the
observations.
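To make the adjustment step concrete, the following minimal Python/NumPy sketch (not from the paper) runs a Gauss-Newton iteration for an over-determined non-linear problem and evaluates the unit weight error of equation (2). It assumes the simpler explicit form b̃ = g(x) + v; the mixed model of equation (1) would additionally need the Jacobian with respect to the observations. All names (g, jac, x0, K) are illustrative.

```python
import numpy as np

def gauss_newton(g, jac, b_obs, x0, K, iterations=10):
    """Estimate x minimising v^T K^-1 v for the residuals v = g(x) - b_obs."""
    x = np.asarray(x0, dtype=float)
    W = np.linalg.inv(K)                       # weight matrix K^-1
    for _ in range(iterations):
        v = g(x) - b_obs                       # residuals at the current guess
        A = jac(x)                             # Jacobian of g with respect to x
        N = A.T @ W @ A                        # normal-equation matrix
        x = x + np.linalg.solve(N, -A.T @ W @ v)
    v = g(x) - b_obs
    r = len(b_obs) - len(x)                    # redundancy
    sigma0_sq = (v @ W @ v) / r                # unit weight error, eq. (2)
    return x, sigma0_sq

# toy example: fit y = a * exp(b * t) to noisy samples
t = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(0)
b_obs = 2.0 * np.exp(-1.5 * t) + rng.normal(0.0, 0.01, t.size)
g = lambda x: x[0] * np.exp(x[1] * t)
jac = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
x_hat, s0_sq = gauss_newton(g, jac, b_obs, [1.0, -1.0], np.eye(t.size) * 0.01**2)
print(x_hat, s0_sq)
```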
3.4 Mathematical model
In this section the different mathematical models for augmented
reality calibration are discussed.
The transformations are expressed as four-by-four transformation
matrices. The transformation from the world co-ordinate system to
the display system can be written as:

T^{Display}_{World} = T^{Display}_{Eyesystem} \, T^{Eyesystem}_{Sensor} \, T^{Sensor}_{Source} \, T^{Source}_{World}     (3)
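As an illustration of equation (3), the following sketch (not taken from the paper) composes the chain of four-by-four matrices with NumPy; the angles and offsets are arbitrary placeholder values, not calibration results.

```python
import numpy as np

def euclidean(rx, ry, rz, tx, ty, tz):
    """Four-by-four rigid-body transformation from Euler angles and a translation."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# placeholder stages of the chain in eq. (3), read right to left
T_display_from_eye   = euclidean(0.0, 0.0, 0.0, 0.0, 0.0, -0.05)
T_eye_from_sensor    = euclidean(0.01, -0.02, 0.0, 0.03, 0.08, 0.02)
T_sensor_from_source = euclidean(0.1, 0.2, -0.1, 0.5, 1.2, 0.8)
T_source_from_world  = np.eye(4)   # world and source coincide (figure 5)

T_display_from_world = (T_display_from_eye @ T_eye_from_sensor
                        @ T_sensor_from_source @ T_source_from_world)
print(T_display_from_world)
```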
In figure 5 the different co-ordinate systems are sketched. The
world co-ordinate system and the source co-ordinate system
coincide in this picture (1). The origin of the eye co-ordinate
system (2) is at the position of the observer's eye. The sensor
co-ordinate system (3) is attached to the glasses and therefore
has a fixed relation to the display co-ordinate system (5).
Besides the intrinsic parameters of the optical system, the
transformation from sensor to eye co-ordinates (4) defines one
set of calibration parameters.
Figure 5. Sketch illustrating the involved co-ordinate systems.
Using formula (3) the projection of the point X can be written
as:

v = (x \;\; y \;\; z \;\; w)^T = T^{Display}_{World} \, X     (4)

Since a perspective projection is involved, the perspective
division (pD) has to be applied:

v' = (x/w \;\; y/w \;\; z/w)^T = pD(v)     (5)
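Equations (4) and (5) can be sketched in code as follows; the world-to-display matrix used here is an arbitrary pinhole-like placeholder, not a calibrated one.

```python
import numpy as np

def perspective_division(v_hom):
    """pD of eq. (5): divide the first three components by w."""
    x, y, z, w = v_hom
    return np.array([x / w, y / w, z / w])

# placeholder world-to-display matrix (focal length 800, principal point 320/240)
T_display_from_world = np.array([
    [800.0,   0.0, 320.0, 0.0],
    [  0.0, 800.0, 240.0, 0.0],
    [  0.0,   0.0,   1.0, 0.0],
    [  0.0,   0.0,   1.0, 0.0],
])

X_world = np.array([0.2, -0.1, 2.0, 1.0])       # homogeneous control point
v = T_display_from_world @ X_world              # eq. (4)
v_prime = perspective_division(v)               # eq. (5)
print(v_prime)
```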
If more than one sensor is available and attached to the glasses,
the constant connection between the different sensors can be
written as:

E(T^{SensorB}_{Source}) = E(T^{SensorB}_{SensorA} \, T^{SensorA}_{Source})     (6)
where E is the function that decomposes a four-by-four
transformation matrix into the rotation angles and the
components of the translation.
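A possible implementation of the decomposition E is sketched below; the Euler angle convention (R = Rz·Ry·Rx) is an assumption, since the convention is not stated here, and the matrices in the usage example are placeholders.

```python
import numpy as np

def decompose(T):
    """E(T): rotation angles (rx, ry, rz) and translation (tx, ty, tz) of a rigid 4x4 matrix."""
    R, t = T[:3, :3], T[:3, 3]
    ry = np.arcsin(-R[2, 0])              # assumes R = Rz @ Ry @ Rx, no gimbal lock
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return np.array([rx, ry, rz, t[0], t[1], t[2]])

# usage sketch for eq. (6): the constant sensor-to-sensor relation should
# reproduce the pose of sensor B obtained through sensor A
T_sensorB_from_sensorA = np.eye(4); T_sensorB_from_sensorA[:3, 3] = [0.10, 0.0, 0.0]
T_sensorA_from_source  = np.eye(4); T_sensorA_from_source[:3, 3]  = [0.0, 0.5, 0.2]
T_sensorB_from_source  = T_sensorB_from_sensorA @ T_sensorA_from_source
residual = decompose(T_sensorB_from_source) - decompose(T_sensorB_from_sensorA @ T_sensorA_from_source)
print(residual)   # zero here by construction; non-zero residuals enter the adjustment
```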
The above representations can be simplified if the equations
(3)-(6) are reduced to the parameters and their relations. Four
types of parameters can be distinguished: the image point, the
control point, the parameters of a Euclidean transformation and
the parameters of the projective transformation. The vector of
parameters of a "Euclidean transformation" is abbreviated by e
and the vector of parameters of a "projective transformation" by
p. Using these abbreviations the equations are written as
homogeneous equations, i.e. the right side of each equation
equals zero. The exact type of concatenation of the parameters is
not important in the following; therefore the concatenation is
symbolised by "∘". As a further abbreviation, instead of the full
name of a co-ordinate system only its first two letters are used.
The equations (4) and (6) can be reformulated as:
v_{Di} \circ p_{EyDi} \circ e_{SeEy} \circ e_{SoSe} \circ e_{WoSo} \circ X_{Wo} = 0     (7)

e_{SoSeB} \circ e_{SeASeB} \circ e_{SoSeA} = 0
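The homogeneous form of equation (7) can be read as a residual function for a single image point, as in the following sketch; the concrete parameterisation (three angles plus a translation for each e, a three-by-four matrix for p) is an illustrative assumption, not the paper's exact choice.

```python
import numpy as np

def euclidean_matrix(e):
    """4x4 matrix from e = (rx, ry, rz, tx, ty, tz)."""
    rx, ry, rz, tx, ty, tz = e
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def image_point_residual(v_Di, p_EyDi, e_SeEy, e_SoSe, e_WoSo, X_Wo):
    """Left-hand side of eq. (7) as an image-plane residual (zero when consistent)."""
    chain = euclidean_matrix(e_SeEy) @ euclidean_matrix(e_SoSe) @ euclidean_matrix(e_WoSo)
    x, y, w = p_EyDi @ chain @ np.append(X_Wo, 1.0)
    return np.array([x / w, y / w]) - v_Di

# tiny check with placeholder values (identity Euclidean transformations)
p_EyDi = np.array([[800.0, 0.0, 320.0, 0.0],
                   [0.0, 800.0, 240.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0]])
e0 = np.zeros(6)
print(image_point_residual(np.array([400.0, 200.0]), p_EyDi, e0, e0, e0,
                           np.array([0.2, -0.1, 2.0])))   # -> [0. 0.]
```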
Each equation represents a single image point measurement. All
vectors of parameters (v, p, e, X) form groups that belong
together. For each image point there may be groups of parameters
that vary, while other groups remain the same. The following five
main models are distinguished to explain the weaknesses and
strengths of the different approaches:
1. Janin [7] transforms the co-ordinates of the control points
   into the sensor system. The errors of the sensors are not
   directly taken into account. The measurement of a point can be
   given here as:

   v_{Di} \circ p_{EyDi} \circ e_{SeEy} \circ e_{SoSe} \circ e_{WoSo} \circ X_{Wo} = 0     (8)
2. Analogous to the preceding approach, Tuceryan's technique [11]
   does not assume a fixed observer's head. The co-ordinates of
   the control points are also given in the sensor system. No
   sensor errors are taken into account. The transformation from
   the sensor to the display system is combined into an
   11-parameter transformation (also known as the "direct linear
   transformation", DLT). The first iteration of the DLT is
   linear, which is why it is often used to obtain approximate
   values for non-linear approaches (a sketch of this linear step
   is given below). As a concatenation it can be formulated as:

   v_{Di} \circ p_{SeDi} \circ e_{SoSe} \circ e_{WoSo} \circ X_{Wo} = 0     (9)
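The linear first step of the DLT referred to above can be sketched as follows: each control point / image point pair contributes two linear equations in the twelve entries of a three-by-four matrix (eleven free parameters up to a scale factor), solved here via an SVD. The synthetic data are placeholders.

```python
import numpy as np

def dlt(points_3d, points_2d):
    """Linear DLT estimate of a 3x4 projection matrix (defined up to scale)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)     # right singular vector of the smallest singular value

# synthetic check: project random points with a known matrix and recover it
rng = np.random.default_rng(1)
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 2.0]])
pts3d = rng.uniform(-1.0, 1.0, (8, 3)) + np.array([0.0, 0.0, 4.0])
proj = (P_true @ np.c_[pts3d, np.ones(8)].T).T
pts2d = proj[:, :2] / proj[:, 2:3]
P_est = dlt(pts3d, pts2d)
print(P_est / P_est[2, 3] * P_true[2, 3])    # matches P_true up to numerical noise
```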