XVIIth ISPRS Congress (Part B5)

As the number of segments in the cameras' field of view increases, these kinds of algorithms become very time consuming. Temporal matching is less expensive, particularly when the interframe motion is small, since we can then reduce the search area. As the camera system moves, in the first step we try to track the initial stereo matches in each camera. Obviously, at each step some new segments enter the cameras' visual fields and some others may not have been tracked. Therefore, to add the newly arrived segments to our set of stereo matches, we again run a classical hypothesis-verification algorithm (see [3]), only on those few segments. This is not very costly because of the reduced number of hypotheses generated.
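The per-step bookkeeping described above can be sketched as follows. This is a hypothetical sketch, not the authors' implementation: `track_pair` and `stereo_match` are assumed stand-ins for the temporal tracker and the hypothesis-verification stereo matcher of [3].

```python
# Sketch of the per-step update described above (hypothetical helpers):
# `track_pair` tries to track one stereo pair into the new frames and
# returns None on failure; `stereo_match` is the (expensive)
# hypothesis-verification matcher, run only on untracked segments.
def update_matches(stereo_pairs, left_frame, right_frame,
                   track_pair, stereo_match):
    tracked = []
    for pair in stereo_pairs:
        new_pair = track_pair(pair, left_frame, right_frame)
        if new_pair is not None:  # successfully tracked in both cameras
            tracked.append(new_pair)
    # Classical hypothesis-verification restricted to the few segments
    # that are not yet part of a tracked stereo pair.
    newcomers = stereo_match(left_frame, right_frame, exclude=tracked)
    return tracked + newcomers
```

Because only the newcomers reach the expensive matcher, the cost per step stays low once most segments are being tracked.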
6.4 Third Step: temporal matching 
Once a part of the stereo pairs of segments has been temporally tracked, in this step we try to track the remaining 2D line segments. We use the results of the first step in two different ways. First, we mark the segments which take part in the stereo-temporal matching. Then we use the motion estimate obtained in the first step to obtain the temporal matches of the remaining 2D line segments in each camera.
Here, we explain our temporal matching algorithm for one of the cameras. For the extremities of each segment $S_{t_i}$, taken at time $t_i$, we draw the corresponding epipolar lines in the image taken by the same camera at time $t_{i+1}$. As we have an estimate of the motion of the camera between $t_i$ and $t_{i+1}$, we can consider the two views as a stereo pair of cameras. The length of a segment, obtained through the edge detection and polygonal approximation processes, is not reliable. Therefore, we do not expect the extremities of the segment $S_{t_{i+1}}$, the temporal match of $S_{t_i}$, to belong to these epipolar lines. We define a function $F(S_{t_i}, S_{t_{i+1}})$ which measures the goodness of a temporal matching $(S_{t_i}, S_{t_{i+1}})$.
Suppose $e_1$ and $e_2$ are the epipolar lines corresponding to the two extremities of $S_{t_i}$. $F$ is defined as follows:

$$F(S_{t_i}, S_{t_{i+1}}) = \alpha\,\frac{d_1 + d_2}{l_{S_{t_{i+1}}}} + \beta\,\frac{\theta(S_{t_i}, S_{t_{i+1}})}{L_{max}}$$

where $d_1$ and $d_2$ are defined as in Fig. 5, $l_{S_{t_{i+1}}}$ denotes the length of the segment $S_{t_{i+1}}$, $\theta(S_{t_i}, S_{t_{i+1}})$ measures the change of direction between the two segments, and in our experiments $\alpha = \beta = 1$ and $L_{max} = 5$.
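A minimal numeric sketch of evaluating such a matching score; the argument names are assumptions, with `d1`, `d2` the distances of the extremities of the candidate segment to the epipolar lines $e_1$, $e_2$, `length` the candidate's length, and `theta` the direction change between the two segments. Lower scores indicate better temporal matches.

```python
def matching_score(d1, d2, length, theta,
                   alpha=1.0, beta=1.0, l_max=5.0):
    """Goodness of a temporal match (lower is better).

    d1, d2 : distances from the candidate's extremities to e1, e2 (pixels)
    length : length of the candidate segment (pixels)
    theta  : change of direction between the two segments
    """
    return alpha * (d1 + d2) / length + beta * theta / l_max

# A candidate whose extremities lie exactly on the epipolar lines and
# whose direction is unchanged gets the best possible score, 0.
```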
Fig. 5. The distances $d_1$ and $d_2$ of the extremities of $S_{t_{i+1}}$ to the epipolar lines $e_1$ and $e_2$.
The second term of the function $F(S_{t_i}, S_{t_{i+1}})$ takes into consideration that after a small motion there is only a small change in the direction of the 2D line segments. More details on this subject can be found in [15].
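The epipolar lines used in this step can be computed from the fundamental matrix relating the two time instants; a minimal sketch, where `F` is a hypothetical fundamental matrix assumed known from the motion estimate:

```python
import numpy as np

def epipolar_line(F, p):
    """Line in the image at t_{i+1} for point p=(x, y) at t_i, as l = F @ p_h.
    Normalized so that |l . q_h| is the point-to-line distance in pixels."""
    l = F @ np.array([p[0], p[1], 1.0])
    return l / np.hypot(l[0], l[1])

def distance_to_line(l, q):
    """Distance of point q=(x, y) to the normalized line l."""
    return abs(l @ np.array([q[0], q[1], 1.0]))
```

The distances $d_1$ and $d_2$ entering the matching score are exactly such point-to-line distances for the two extremities of the candidate segment.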
7 Results 
We have used several sequences of stereo images obtained by our mobile robot. The baseline is about 43 mm, the focal length 8 mm, and the pixel size 8×14 µm². The distance between the objects and the cameras varies from 2 m to dm. In the experiments presented here, the robot rotates 5.0 degrees around its vertical axis and moves forward 15 cm at each step. Experimental results are shown in Figures 6-13.
Figures 6-7 and 8-9 display images taken by the first and the second cameras, at $t_1$ and $t_2$ respectively. The results obtained at different steps of the algorithm are shown in Figures 10-13. Figures 10-11 show the stereo-temporal matching segments after one iteration of the first step of the algorithm. We also apply the estimated motion to the 3D data obtained at $t_1$ and show their projections on the first camera (black segments) overlaid on the image taken at $t_2$ for comparison. Temporal matches (white segments) are used to update the motion estimate. Figures 12-13 show the result of the third iteration. The motion estimate is improved and we track almost all the segments. After the second and the third steps of the algorithm, almost all the 2D line segments are correctly matched. Due to the large number of tracked segments on each camera, it is not easy to visualize the results in black and white. Therefore, the results of the second and the third steps of the algorithm are presented using color slides during the conference.
8 Conclusion
We have presented a unified and iterative algorithm for the fusion of visual data based on the dynamic cooperation between the stereo matching and temporal matching processes. This cooperation is robust and less time consuming than performing a classical stereo reconstruction at each step of the motion of a mobile robot, and the results are quite satisfactory. As we use all segments to estimate the kinematic screw, the method only works if all the segments considered actually belong to the same rigid object; otherwise it fails. A solution for multiple-object motion analysis based on the stereo-motion cooperation is given in [16].
References 
[1] J.K. Aggarwal and Y.F. Wang. Analysis of a sequence of im- 
ages using point and line correspondences. In Proc. Int'l Conf. 
Robotics Automation, pages 1275-1280, Raleigh, NC, March 
31-April 3 1987. IEEE. 
[2] J. Arspang. Direct scene determination: Local relative or absolute surface depth, geometry and velocity from monocular and multi-ocular image sequences. Technical Report 88/3, Computer Science Department, University of Copenhagen, January 1988.
[3] N. Ayache and F. Lustman. Fast and reliable passive trinocular 
stereovision. In Proceedings ICCV '87, London, pages 422-427. 
IEEE, June 1987. 
[4] T.J. Broida, S. Chandrashekhar, and R. Chellappa. Recur- 
sive 3-D motion estimation from a monocular image sequence. 
IEEE Trans. AES, 26(4):639-656, July 1990. 
[5] Rachid Deriche and Olivier D. Faugeras. Tracking line segments. In Proceedings of the 1st ECCV, pages 259-268. Springer Verlag, April 1990.
[6] O.D. Faugeras and G. Toscani. The calibration problem for 
stereo. In Proc. IEEE Conf. Comput. Vision Pattern Recog., 
pages 15-20, Miami, FL, June 1986. IEEE. 
[7] Olivier D. Faugeras, Rachid Deriche, and Nassir Navab. From optical flow of lines to 3D motion and structure. In Proceedings IEEE/RSJ International Workshop on Intelligent Robots and Systems '89, pages 646-649, Tsukuba, Japan, 1989.