accuracy needed for measurements of objects at long ranges. In
each image of a sequence, objects are detected, and compatible
objects are connected to tracks within the sequence.
Additionally, the correspondence problem between objects in
the sequences of the different sensors is solved for each time step.
The three-dimensional position of a potential object is then
determined by resection in space. Finally, the object positions
of all time steps are transformed into a trajectory within a
space-time cube, and from this, three-dimensional velocity
vectors are calculated.
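The processing chain above lends itself to a compact sketch. The following Python fragment is a minimal, self-contained illustration, assuming a single tracked object and the ideal parallel two-sensor geometry introduced in Section 2; all function names and parameter values (b, f, a) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the processing chain described above, assuming one
# tracked object and two exactly parallel, identical pinhole sensors.
# Parameter values (b, f, a) are illustrative, not the paper's sensors.

def triangulate(x_l, y_l, x_r, b=10.0, f=0.3, a=20e-6):
    """Parallel-camera resection: image positions in pitch units -> (x, y, z)."""
    d = x_l - x_r                    # disparity, in units of the pitch
    z = b * f / (a * d)              # range, cf. equation (1) in Section 2
    x = x_l * a * z / f              # lateral position by similar triangles
    y = y_l * a * z / f
    return x, y, z

def track_to_velocity(detections, dt):
    """detections: per-frame matched pairs ((x_l, y_l), (x_r, y_r))."""
    traj = []                        # (t, x, y, z) points in the space-time cube
    for k, ((x_l, y_l), (x_r, y_r)) in enumerate(detections):
        traj.append((k * dt,) + triangulate(x_l, y_l, x_r))
    # finite differences along the trajectory give 3-D velocity vectors
    return [tuple((p2[i] - p1[i]) / (p2[0] - p1[0]) for i in (1, 2, 3))
            for p1, p2 in zip(traj, traj[1:])]
```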
In this paper we describe investigations into the accuracy and
reliability of this approach, mainly concerning the last two steps
(the three-dimensional position and the velocity of the objects).
Unavoidable uncertainties in the measurement of the two-
dimensional object position in the sensor focal plane lead to
rather large errors in the estimated distance, which in turn affect
the accuracy of velocity extraction from the sequence.
We present a quantitative analysis of this issue, resulting in
statements about fundamental restrictions on the velocity
estimation of objects. These considerations of accuracy and
reliability are important for the design of multi-ocular IRST
systems.

A measurement campaign was carried out to capture image
sequence data with real objects using IR sensors. It will be
shown that, by taking these fundamental restrictions into
account, adaptive processing leads to more robust results for the
estimation of the spatial position and velocity. This information
can be used effectively to reduce the false alarm rate (FAR).
2. POSITION ACCURACY 
To get reliable information about the three-dimensional object 
position and velocity it is necessary to ascertain the accuracy of 
these values. Since the accuracy of the three-dimensional object 
velocity depends on the accuracy of the three-dimensional 
object positions calculated for each image-pair in the sequence, 
we first discuss the fundamental limitations in the accuracy of 
the object position and use this result to obtain the accuracy of 
the calculated object velocity. 
In order to find fundamental limitations in position and velocity
accuracy, we assume conditions as ideal as possible.
For example, the sensors are exactly identical, with the same
focal length and pitch (the distance between the centres of two
adjacent detector elements), and can be approximated by pinhole
cameras. In addition, the two sensors are aligned exactly
parallel. Figure 1 shows a sketch of this situation and introduces
the coordinate system, with the z-axis along the viewing
direction of the sensors and the x-axis along the baseline. The
baseline is the distance between the sensors perpendicular to
their viewing direction.
Also shown in Figure 1 are the images of an object for both
sensors. From one image alone the range of the object is not
known, but the object position in the image of one sensor (e.g.
the left) together with the pinholes of the two sensors defines a
plane, which intersects the focal plane of the other sensor (the
right) in the so-called epipolar line. In the ideal case the image
of the object in this (right) sensor lies exactly on this epipolar
line. The difference of the object positions along this epipolar
line, which is in our case the difference of the horizontal
positions x_img1 and x_img2, is called the disparity and is
measured in units of the pitch.
  
    
Figure 1. Arrangement of the stereoscopic system with
focal length f and baseline b, and the two sensor
images of an object with horizontal positions x_img1
and x_img2.
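Since the sensors are assumed exactly parallel, the epipolar line is simply the image row, which suggests a very simple correspondence test: a compatible object must appear at (nearly) the same vertical position in both images. A minimal sketch, with an illustrative tolerance value:

```python
# Sketch of cross-sensor correspondence for the ideal parallel setup:
# the epipolar line is the image row, so a match must lie at (nearly)
# the same vertical position. The tolerance value is illustrative.

def match_on_epipolar_line(obj_left, candidates_right, y_tol=0.5):
    """Return (matched right position, disparity) or None.
    Positions are in pixels, i.e. units of the pitch."""
    x_l, y_l = obj_left
    best, best_dy = None, y_tol
    for x_r, y_r in candidates_right:
        dy = abs(y_r - y_l)
        if dy <= best_dy:           # lies on (close to) the epipolar line
            best, best_dy = (x_r, y_r), dy
    if best is None:
        return None
    disparity = x_l - best[0]       # difference of horizontal positions
    return best, disparity
```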
The three-dimensional position of the object is given either by
the direction (two angles) and the range, or by the three Cartesian
coordinates (x, y and z). Both representations are equivalent
and can be transformed easily into each other. The direction is
given by the two-dimensional object position in the image of
one sensor. The range can be calculated from the z-component
of the three-dimensional object position. The value of the
z-component is given in terms of the baseline (b), focal length (f),
pitch (a) and disparity (d) by the simple equation (1):
$z = \frac{b \cdot f}{a \cdot d} = \frac{z_{\max}}{d}$    (1)
The quantity z_max = b·f/a is defined by the three parameters
baseline, focal length and pitch of the stereoscopic system. It has
the dimension of a length and introduces a natural measure of
length for a given system. In addition, z_max is the greatest
distance distinguishable from infinity for the given system, since
larger distances correspond to disparities of less than one pitch.
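For concreteness, a short numeric illustration of equation (1); the parameter values are assumptions chosen for the example, not the paper's sensor parameters:

```python
# Numeric illustration of equation (1). The parameter values are
# illustrative assumptions, not the sensor parameters of the paper.
b = 10.0        # baseline in metres
f = 0.3         # focal length in metres
a = 20e-6       # pitch in metres (20 micrometres)

z_max = b * f / a                 # natural length scale of the system
print(z_max)                      # 150000.0 m = 150 km

for d in (1, 2, 10):              # disparity in units of the pitch
    z = z_max / d                 # range from equation (1)
    print(d, z)                   # 150 km, 75 km, 15 km
```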
Since we are interested in objects at long ranges, the size of the
physical image of an object in the focal plane is usually smaller
than the pixel size. This means that the minimal uncertainty of
the direction of the object is given by the instantaneous field of
view (IFOV) of the corresponding pixel. The intersection of the
two IFOVs of the two sensors leads to a volume in space
which defines the uncertainty of the object position. In Figure 2
a horizontal section through this volume is shown.

Figure 2. Horizontal section through the intersection volume of
the two IFOVs of the stereoscopic system, defining the
uncertainty of the three-dimensional object position.
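Because the object direction is only known to within one IFOV, the measured disparity is effectively quantised: a measured disparity d is compatible with true disparities in roughly [d − 0.5, d + 0.5], so the range interval spanned by the IFOV intersection volume widens rapidly with distance. A minimal numeric sketch, using the same illustrative parameters as above (these values are assumptions, not the paper's):

```python
# Sketch of the range uncertainty implied by the IFOV intersection volume.
# Assumption: the disparity d (pitch units) is only known to +/- 0.5 pixel;
# parameter values are the illustrative ones used earlier.
b, f, a = 10.0, 0.3, 20e-6
z_max = b * f / a                     # 150 km for these values

def range_interval(d, half_width=0.5):
    """Near/far range bounds compatible with a measured disparity d."""
    z_far = z_max / (d - half_width) if d > half_width else float("inf")
    z_near = z_max / (d + half_width)
    return z_near, z_far

for d in (1, 2, 10):
    z = z_max / d
    z_near, z_far = range_interval(d)
    print(f"d={d}: z={z/1e3:.0f} km, interval {z_near/1e3:.0f}-{z_far/1e3:.0f} km")
# d=1:  z=150 km, interval 100-300 km  -> uncertainty comparable to z itself
# d=10: z=15 km,  interval 14-16 km    -> relative error shrinks as z/z_max
```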
	        