Close-range imaging, long-range vision

… as complex as if head position and head orientation are used as observed parameters, because for each image point its own head position and orientation has to be assumed. As shown in the following sections, these problems can be handled by special design patterns of object-oriented programming. In the last section the following questions are discussed: Which calibration procedure is the optimal one? How can the calibration effort be optimised? Which calibration parameters are redundant?
2. EQUIPMENT 
The results presented in this paper were produced using the Ascension Flock of Birds (FOB) tracking system and the i-glasses Protec STHMD (see figure 2). The basic source co-ordinate system is realised by a transmitter that generates a magnetic field. The two sensors of the system, in the following also referred to as "birds", are mobile sensors that compute their orientation and position from measurements of the transmitter's magnetic field. In figure 2 the first bird is attached to the glasses; the second bird is lying on the table between the glasses and the transmitter. Each sensor measures its position in the source (transmitter) co-ordinate system and the orientation of its own co-ordinate system relative to the source co-ordinate system.
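The sensor output can thus be thought of as a rigid transform from the bird's own co-ordinate system into the source co-ordinate system. The following minimal sketch (not the authors' code; the record layout and the Euler-angle convention are assumptions, since the actual FOB output depends on its configured mode) illustrates this interpretation in Python:

import numpy as np
from dataclasses import dataclass

def rotation_zyx(azimuth_deg, elevation_deg, roll_deg):
    # Rotation matrix from z-y-x Euler angles given in degrees (assumed convention).
    a, e, r = np.radians([azimuth_deg, elevation_deg, roll_deg])
    rz = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[np.cos(e), 0.0, np.sin(e)], [0.0, 1.0, 0.0], [-np.sin(e), 0.0, np.cos(e)]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, np.cos(r), -np.sin(r)], [0.0, np.sin(r), np.cos(r)]])
    return rz @ ry @ rx

@dataclass
class BirdMeasurement:
    position: np.ndarray   # sensor origin in the source (transmitter) co-ordinate system [m]
    rotation: np.ndarray   # 3x3 matrix mapping sensor co-ordinates to source co-ordinates

    def to_source(self, p_sensor):
        # Transform a point given in the sensor co-ordinate system into the source system.
        return self.rotation @ p_sensor + self.position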
  
Figure 2. The components of the studied augmented reality system.
2.1 Estimation of the sensor and image point accuracy 
A fast method to verify the manufacturer's specification [1] of a sensor's accuracy is to compare it with other sensors of superior accuracy [6], [8], [10]. For this study no device of superior accuracy was available, so the accuracy was estimated by experiments. The accuracy of the position measurements was estimated by forcing the centre of the sensor to lie physically on a sphere. The centre of the sphere and its radius were estimated as unknown parameters; the position measurements were used as observations. The mean error of the measured positions was estimated separately for various distances to the transmitter. The results are given in figure 3. The dispersion in the upper right corner is a result of gross errors.
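A minimal sketch of such a sphere adjustment is given below; it assumes scipy and uses the RMS of the sphere residuals as a simple stand-in for the rigorously estimated mean error, so it illustrates the idea rather than reproducing the adjustment actually used:

import numpy as np
from scipy.optimize import least_squares

def sphere_fit_rms(points):
    # points: (n, 3) array of measured sensor positions that physically lie on a sphere.
    def residuals(params):
        centre, radius = params[:3], params[3]
        return np.linalg.norm(points - centre, axis=1) - radius

    centre0 = points.mean(axis=0)
    radius0 = np.linalg.norm(points - centre0, axis=1).mean()
    sol = least_squares(residuals, x0=np.r_[centre0, radius0])
    rms = np.sqrt(np.mean(sol.fun ** 2))   # residual RMS as empirical position error
    return sol.x[:3], sol.x[3], rms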
For the estimation of the accuracy of the measured angles, the sensor was laid on a plane surface and turned around on the same spot while its orientation was measured continuously. All measurements can then be described as rotations about the same axis, and all angle measurements can further be referred to an initial, fixed orientation. This model can be used to estimate the error of the observed angles, again separately for different distances to the transmitter (see figure 4). The resulting equations are not linear; a solution is computed using the Newton method. Gross errors cause a divergence of the Newton method, and as a result of the divergence some outliers are identified automatically. That is why the number of outliers is not as large as in figure 3.
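The following sketch illustrates the underlying model with a simpler closed-form check instead of the Newton-type adjustment described above (scipy's Rotation class and all names are assumptions): relative to the initial orientation, every measurement should be a rotation about the same axis, so the scatter of the measured rotation axes gives a rough empirical angular error.

import numpy as np
from scipy.spatial.transform import Rotation

def axis_deviation_deg(rotations, min_angle_deg=5.0):
    # rotations: list of scipy Rotation objects measured while the sensor is turned in place.
    rel = [r * rotations[0].inv() for r in rotations[1:]]   # refer to the initial orientation
    rotvecs = np.array([q.as_rotvec() for q in rel])
    angles = np.linalg.norm(rotvecs, axis=1)
    keep = angles > np.radians(min_angle_deg)               # tiny turns carry no axis information
    axes = rotvecs[keep] / angles[keep, None]
    axes[axes @ axes[0] < 0.0] *= -1.0                      # resolve the +/- axis ambiguity
    mean_axis = axes.mean(axis=0)
    mean_axis /= np.linalg.norm(mean_axis)
    return np.degrees(np.arccos(np.clip(axes @ mean_axis, -1.0, 1.0)))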
The image point accuracy is determined empirically from the differences between a point displayed on the screen and its corresponding image point, measured by a user with a cross-hair. The control points have been determined with superior accuracy and are assumed to be error free in comparison to all other occurring errors.
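Under this assumption the image point accuracy reduces to the spread of the cross-hair measurements around the displayed points, e.g. (illustrative sketch, names are mine):

import numpy as np

def image_point_rms(displayed, measured):
    # displayed, measured: (n, 2) pixel coordinates; control points are treated as error free.
    d = measured - displayed
    return np.sqrt(np.mean(d ** 2, axis=0))   # RMS error per image axis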
  
  
  
  
Figure 3. Positioning accuracy at different distances (x-axis: distance point to transmitter [m]).
  
  
  
  
Figure 4. Accuracy of the angles at different distances (x-axis: distance [m]).
3. METHODS 
In the following, parameter estimation theory (explained e.g. in [4]) is used for the accuracy estimation of the observations and for the estimation of the unknown parameters. The implementation of the parameter estimation program employs the general case described in [4]. This general case assumes equation systems with conditions between several observations and unknowns, extended by restrictions between the unknowns.
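Written out (as a sketch in common adjustment notation, not taken literally from [4]), this general case of the Gauss-Helmert model with restrictions reads

\min_{x,\,v} \; v^{\mathsf{T}} P\, v
\quad \text{subject to} \quad
f(x,\, l + v) = 0, \qquad g(x) = 0,

where l denotes the observations, v their errors, P the weight matrix, x the unknown parameters, f the condition equations between observations and unknowns, and g the restrictions between the unknowns.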
The goal is to find the minimum of the weighted sum of squared errors. To describe the problem in a formal framework, the following concepts have to be formalised. Let x be the vector of