XVIIth ISPRS Congress (Part B5)

And for the initialization, we take $P_{0|0} = \sigma_0^2 I_6$ and $s_{0|0} = 0$ (see section 4).
Replacing these in the above Kalman filter equations, we calculate the Kalman gain matrix $K_i$ and the covariance matrix of the states $P_{i|i}$:
$$
P_{i|i} \;=\; \sigma_0^2 I_6 \;-\; c_i
\begin{bmatrix}
\lambda_1 H_i H_i^T + \lambda_2 L_i L_i^T & h_i \mu\, L_i n_i^T \\
h_i \mu\, n_i L_i^T & \lambda_3\, n_i n_i^T
\end{bmatrix},
$$
where the scalars $c_i$, $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\mu$ are functions of $\sigma_0^2$, $h_i$ and the measurement noise. Here, we suppose that $\Lambda_i$ is a $2 \times 2$ diagonal matrix, with $\sigma_i^2$ and $\beta_i^2$ on the diagonal. This is only to simplify the symbolic calculations and does not affect the validity of our interpretations.
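As a concrete illustration of these quantities, here is a minimal sketch (not the authors' implementation), written in Python with a hypothetical $2 \times 6$ measurement Jacobian $F_i$ and the diagonal noise $\Lambda_i = \mathrm{diag}(\sigma_i^2, \beta_i^2)$, of how $K_i$ and $P_{i|i}$ are computed from $P_{i|i-1} = \sigma_0^2 I_6$:

```python
import numpy as np

def kalman_update(P_pred, F_i, Lambda_i):
    """One measurement update of the motion filter (generic form).

    P_pred   -- 6x6 predicted covariance of the state (Omega, V)
    F_i      -- 2x6 measurement Jacobian of segment S_i (hypothetical)
    Lambda_i -- 2x2 measurement-noise covariance diag(sigma_i^2, beta_i^2)
    """
    S = F_i @ P_pred @ F_i.T + Lambda_i        # innovation covariance
    K = P_pred @ F_i.T @ np.linalg.inv(S)      # Kalman gain K_i (6x2)
    P = (np.eye(6) - K @ F_i) @ P_pred         # updated covariance P_{i|i}
    return K, P

# Initialization as in the text: P_{0|0} = sigma_0^2 I_6 (sigma_0 arbitrary here).
sigma_0 = 10.0
P0 = sigma_0**2 * np.eye(6)
```

Since each segment contributes a measurement of dimension two, a single update can reduce the uncertainty of the six motion parameters only in the directions spanned by the rows of $F_i$, which is what the expression of $P_{i|i}$ above makes explicit.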
The matrix $P_i$ is expressed in the camera coordinate system. Let us express this matrix in the coordinate system defined by the orthonormal vectors $n_i$, $L_i$ and $H_i$, at the same origin $O$, writing the state vector in this coordinate system as
$$
X_i = \begin{bmatrix} \Omega_{n_i} & \Omega_{L_i} & \Omega_{H_i} & V_{n_i} & V_{L_i} & V_{H_i} \end{bmatrix}^T .
$$
From $P_i$, the covariance matrix associated with $X_i$, we easily see that segment $S_i$ brings no information in the direction of $n_i$ for $\Omega$, nor in the plane defined by $L_i$ and $H_i$ for the estimation of $V$. Looking at $F_i$ and $Z_i$ (equation 11), we see that the information we have on $\Omega$ does not depend on the depth $h_i$, and $P_i$ tells us that the Kalman filtering process also takes this into account.
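This observability statement can be checked numerically. The sketch below (again not the paper's code) builds a hypothetical Jacobian whose rotational part lies in the plane $(L_i, H_i)$ and whose translational part is along $n_i$, an assumption made only to mimic the structure discussed here; after the covariance update from $P_{i|i-1} = \sigma_0^2 I_6$, the covariance re-expressed in the $(n_i, L_i, H_i)$ frame keeps the initial variance $\sigma_0^2$ for $\Omega$ along $n_i$ and for $V$ along $L_i$ and $H_i$:

```python
import numpy as np

# A random orthonormal triad standing in for (n_i, L_i, H_i).
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
n_i, L_i, H_i = Q.T
h_i, sigma_0 = 2.5, 10.0

# Hypothetical 2x6 Jacobian: rotational part in span{L_i, H_i},
# translational part along n_i (assumption for this illustration only).
F_i = np.vstack([
    np.concatenate([0.7 * L_i + 0.2 * H_i, (1.0 / h_i) * n_i]),
    np.concatenate([0.3 * H_i,             (0.5 / h_i) * n_i]),
])
Lambda_i = np.diag([0.01, 0.02])                 # diag(sigma_i^2, beta_i^2)

P0 = sigma_0**2 * np.eye(6)                      # P_{i|i-1} = sigma_0^2 I_6
S = F_i @ P0 @ F_i.T + Lambda_i                  # innovation covariance
P = P0 - P0 @ F_i.T @ np.linalg.solve(S, F_i @ P0)   # updated P_{i|i}

# Express P in the (n_i, L_i, H_i) frame for both the Omega and V blocks.
B = np.vstack([n_i, L_i, H_i])                   # camera frame -> (n, L, H)
T = np.block([[B, np.zeros((3, 3))], [np.zeros((3, 3)), B]])
P_nlh = T @ P @ T.T

# Omega along n_i and V along L_i, H_i keep their prior variance sigma_0^2:
print(P_nlh[0, 0], P_nlh[4, 4], P_nlh[5, 5])     # -> 100.0 each (up to rounding)
```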
6 Overview of the algorithm 
In this section, we outline our algorithm to solve the stereo- 
motion cooperation problem that arises in the context of 
a mobile vehicle navigating in an unknown static environ- 
ment. A trinocular stereo vision system mounted on the 
mobile vehicle has been calibrated, and the algorithm de- 
scribed in this paper uses only two cameras. Our algorithm 
is applied to a sequence of stereo pairs obtained by the 
robot. Two adjacent pairs of stereo images are illustrated 
in figures 6-7 and 8-9.
The algorithm has three different steps (see figure 3). In the first step, we track only those image segments on the first and second cameras which have been found to be images of the same 3D lines. These segments are obtained through a stereo matching process described later in this paper. In this step we also obtain an estimate of the ego-motion of the camera system. In the second step, we first update our 3D lines using the 3D-2D matches obtained in the first step. Then, we exclude the stereo matches found in the first step and use the epipolar constraint to find the stereo pairs of lines which have come into the field of view. In this way, we not only update the available 3D lines, but also find the new stereo pairs which appear in the field of view of both cameras. In the third step, we use the estimated ego-motion to track the remaining line segments on each camera. This gives us the temporal matchings on each camera. Finally, we use the results of the three steps of the algorithm to complete our three-dimensional reconstruction of the robot's environment. We now explain the three steps of the algorithm in more detail.
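To make the data flow of figure 3 explicit, here is a minimal sketch of one iteration of the three-step loop. It is written in Python, and all object and method names (map_3d, motion_filter, stereo, tracker) are hypothetical placeholders, not the authors' implementation:

```python
def process_frame(map_3d, motion_filter, stereo, tracker, new_pair):
    """One iteration of the three-step algorithm (hypothetical interfaces).

    map_3d        -- current set of reconstructed 3D line segments
    motion_filter -- Kalman filter on the ego-motion (Omega, V)
    stereo        -- stereo matcher using the epipolar constraint
    tracker       -- 2D segment tracker on a single camera
    new_pair      -- the new pair of images (camera 1, camera 2)
    """
    # Step 1: track only the segments that are images of the same 3D line on
    # both cameras, and estimate the ego-motion from these 3D-2D matches.
    matches = tracker.track(map_3d, new_pair, motion_filter.predict())
    motion = motion_filter.update(matches)

    # Step 2: update the tracked 3D lines, then look for new stereo pairs
    # among the segments left unmatched in step 1.
    map_3d.update(matches, motion)
    map_3d.add(stereo.match_and_reconstruct(new_pair, exclude=matches))

    # Step 3: use the estimated ego-motion to track the remaining segments on
    # each camera separately, giving the temporal matchings per camera.
    tracker.track_remaining(new_pair, motion)
    return motion
```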
   
Fig. 3. Flowchart of the algorithm for frame n. Blocks: edge detection on camera 1 and camera 2, 2D feature lists, stereo match, match lists, 2D limbo lists, 3D feature lists, 3D feature update, 3D instantiate & update, and updated lines, grouped into the first, second and third steps.
6.1 First step: Temporal matching of stereo pairs
Suppose that at time $t_1$ we have a set of 3D line segments, which have been reconstructed from all the stereo pairs available up to $t_1$. The reconstruction technique will be described later. At initialization, the stereo algorithm described in [3] is used to obtain the first set of 3D line segments. We also have an initial estimate of the motion between $t_1$ and $t_2$. Under the assumption of smooth motion, this motion estimate can be derived from the information obtained by analysing previous stereo pairs. In the initialization phase, the motion estimate can be obtained from the odometric system of the mobile vehicle. In our implementation, since the interframe motion is assumed to be small, the motion parameters are initialized to zero (i.e., no motion), but with a large covariance (i.e., very uncertain). This makes the initialization phase much more difficult.
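A minimal sketch of this initialization, assuming a six-dimensional motion state $(\Omega, V)$ and an arbitrary large prior standard deviation (the values below are placeholders, not those of the actual system):

```python
import numpy as np

def initial_motion(odometry=None, sigma_0=1.0e2):
    """Initial interframe motion estimate (hypothetical helper).

    Without odometry, the motion parameters are set to zero (no motion)
    with a large covariance (very uncertain); with odometry available,
    its estimate is used instead, as in the initialization phase above.
    """
    x0 = np.zeros(6) if odometry is None else np.asarray(odometry, dtype=float)
    P0 = sigma_0**2 * np.eye(6)      # P_{0|0} = sigma_0^2 I_6
    return x0, P0
```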
First of all, the stereo reconstructions help us choose the pairs of segments which correspond to the 3D lines closest to the camera. These stereo matches give us more robust depth information and, once they have been tracked, they also give us more robust motion information. We then apply the estimated motion to such 3D lines and project them onto the cameras. In figure 4, $D(t_1)$ is an example of such a 3D line and $D(t_2)$ is its transformed line. The dashed lines on the cameras are its projections.
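This prediction can be sketched as follows; the representation of a 3D segment by its two endpoints, the 3 x 4 projection matrices, and the helper name are assumptions made for the illustration only:

```python
import numpy as np

def predict_and_project(lines_3d, R, t, projections, k_nearest=20):
    """Apply the estimated motion (R, t) to the 3D segments closest to the
    camera and project them onto each image (a segment is represented here
    by its two endpoints in the camera coordinate system).
    """
    # Keep the k segments closest to the camera: their depth is more reliable
    # and, once tracked, they also give more robust motion information.
    lines = sorted(lines_3d, key=lambda seg: min(seg[0][2], seg[1][2]))[:k_nearest]

    predicted = []
    for p, q in lines:                        # D(t1): endpoints p, q
        p2, q2 = R @ p + t, R @ q + t         # D(t2): transformed endpoints
        images = []
        for M in projections:                 # one 3x4 projection matrix per camera
            a = M @ np.append(p2, 1.0)
            b = M @ np.append(q2, 1.0)
            images.append((a[:2] / a[2], b[:2] / b[2]))   # projected 2D segment
        predicted.append(images)
    return predicted
```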
We then find the nearest 2D segments to those pro- 
jected segments in the next images. To compute the dis- 
tance between reprojected segments and the next image 