International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 
POSITION ERROR ANALYSIS OF A 3D TRACKING SYSTEM 
Aranda, J. (+), Gibert, K. (*), Climent, J. (+) and Grau, A. (+)
(+) Dep. of Automatic Control & Computer Engineering.
(*) Dep. of Statistics and Operations Research
Universitat Politècnica de Catalunya
Pau Gargallo, 5. 08028. Barcelona. SPAIN.
E-mail: aranda@esaii.upc.es
Commission V, Working Group IC V/III 
KEY WORDS: Tracking, real time, image processing hardware, error modelling, error propagation. 
ABSTRACT 
A new 3D tracking method is presented which makes use of specific image processing hardware developed in our laboratory. This image processor performs, at video rate, an image transformation consisting of the computation of the distance from each pixel in the image to the contour pixels around it (if present). To minimize the cost of this processing hardware, only eight distance values in a 15x15 pixel window are obtained for every pixel, corresponding to the eight main directions (N, NE, E, SE, S, SW, W, NW). This has proven to be sufficient for many tracking applications. This vector of distances identifies singular points in the contour image and is used in their recognition process (applied both in the stereo matching process and in the sequence matching process). Two main errors disturb the output of the tracking system (the three-dimensional position of these singular points over time): image resolution and the localization error of contour pixels. The modelling and propagation of these two main errors, both in the image transformation process and in the recognition/position estimation processes, is fully explained.
1. INTRODUCTION 
Tracking systems based on computer vision are sensitive to
the errors arising from image formation and processing.
The accuracy of the position measurements of the tracked target
depends on these errors. However, little effort is usually made
to evaluate how these errors disturb the output of the
system. In this paper we tackle this problem for the particular
case of an implemented tracking system based on specific
image processing hardware.
The efficiency of tracking systems can be measured by two
(usually opposing) parameters: the reliability of the recognition
process and its execution time. The latter determines the system
sampling period and obviously has to be as short as possible.
All tracking methods include a compromise solution to balance
this trade-off, usually by limiting the set of targets that can be
recognized and the circumstances in which they can be tracked.
To maximize reliability without penalizing the sampling
period, huge and expensive computational resources are
required to perform real-time tracking. This circumstance limits
the massive application of such systems in industry [Amat,93].
In our case, a polar representation of image object contours has
been chosen for recognition [Gonzalez,87]. This polar
descriptor reduces the contour representation from two
dimensions to one. It also provides an easy way to normalize
the size, position and orientation of the object contour. A contour
rotation appears as a translation in the transformed space, so the
transformed description is easier to track under object
rotations. For these reasons, variations of the polar transform have
been used by many authors as a preliminary step in pattern
recognition [Jeng,91] [Sekita,92] [Friedland,92].
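To make the rotation-as-translation property concrete, the following is a minimal sketch (our own illustrative code, not the paper's implementation; function and variable names are ours) of a coarse polar descriptor built around the contour centroid. Rotating the contour by one angular step circularly shifts the descriptor:

```python
import numpy as np

def polar_descriptor(points, n_bins=8):
    """points: (M, 2) array of contour (x, y) coordinates.
    Returns the farthest radius found in each angular sector
    around the contour centroid."""
    d = points - points.mean(axis=0)             # centre on the centroid
    ang = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    rad = np.hypot(d[:, 0], d[:, 1])
    # Assign each point to the nearest of n_bins angular sectors.
    bins = np.round(ang / (2 * np.pi / n_bins)).astype(int) % n_bins
    desc = np.zeros(n_bins)
    for b in range(n_bins):
        if np.any(bins == b):
            desc[b] = rad[bins == b].max()
    return desc

# A 90-degree rotation of the contour, (x, y) -> (-y, x), computed exactly.
square = np.array([(2.0, 0.0), (0.0, 1.0), (-2.0, 0.0), (0.0, -1.0)])
rot90 = np.column_stack((-square[:, 1], square[:, 0]))
print(polar_descriptor(square, 4))  # [2. 1. 2. 1.]
print(polar_descriptor(rot90, 4))   # circular shift: [1. 2. 1. 2.]
```

The rotated contour yields the same descriptor shifted by one position, which is why matching in the transformed space is insensitive to object orientation.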
However, in the presented tracking system the polar transform
is only applied locally, to those singular regions (local features)
present in the object contour [Amat,92]. The polar
transformation has been reduced and optimized in order to
implement it with low-cost hardware. In this way the
transformation has been limited to a 15x15 pixel region, from
which only 8 radii are selected, in the 8 main directions (N,
NE, E, SE, S, SW, W, NW). These radii represent the distance
from the central pixel of the analyzed region to the first contour
pixel found in the corresponding direction (figure 1).
Figure 1(a). Distribution of radii in the transformation window.
Figure 1(b). Resulting vector descriptor ρ(θ).
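The 8-radius descriptor described above can be sketched in software as follows (a hypothetical reimplementation for illustration only; in the paper this transformation runs at video rate in dedicated hardware):

```python
import numpy as np

# Unit steps for the 8 main directions: N, NE, E, SE, S, SW, W, NW.
DIRECTIONS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
              (1, 0), (1, -1), (0, -1), (-1, -1)]

def distance_vector(contour, y, x, radius=7):
    """Distance (in steps) from pixel (y, x) of a binary contour image
    to the first contour pixel found in each of the 8 main directions;
    0 when none is found within the 15x15 window (radius 7)."""
    h, w = contour.shape
    vec = []
    for dy, dx in DIRECTIONS:
        d = 0
        for r in range(1, radius + 1):
            yy, xx = y + r * dy, x + r * dx
            if 0 <= yy < h and 0 <= xx < w and contour[yy, xx]:
                d = r
                break
        vec.append(d)
    return vec

# Toy example: a single contour pixel 3 steps north of the window centre.
img = np.zeros((15, 15), dtype=bool)
img[7 - 3, 7] = True
print(distance_vector(img, 7, 7))  # [3, 0, 0, 0, 0, 0, 0, 0]
```

Restricting the search to 8 directions and a 15x15 window is what keeps the per-pixel cost small enough for a low-cost hardware implementation.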
 
	        