that using zero for all approximate values already leads to convergence. Approximate values are only needed for the rotation parameters; any approximate value for the translation will lead to convergence as long as the rotation parameters are within the pull-in range of the system. Some examples of initial values that are sufficient for convergence are given in Table 1.
Table 1. Examples of initial values within the pull-in range

Rotation between scans         | Initial values that are within
around x-, y-, z-axis          | the pull-in range
0°, 0°, 30°                    | 0°, 0°, 60° and 0°, 0°, 15°
60°, 60°, 60°                  | 36°, 36°, 36° and 30°, 30°, 30°
10°, 0°, 40°                   | 0°, 70°, 0°
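To make the use of such approximate values concrete, the minimal sketch below shows one possible way to set up the registration: corresponding planes (unit normal n and offset d, with n · x = d) fitted in two scans are brought into agreement by non-linear least squares, starting from rough rotation angles and a zero translation. This is an illustrative sketch only, not the implementation used here; the plane-based residual, the function names and the use of numpy/scipy are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def residuals(params, planes_1, planes_2):
    """Mismatch between scan-1 planes and scan-2 planes mapped into scan 1.

    params = (rx, ry, rz, tx, ty, tz): Euler angles in degrees and a translation,
    with x1 = R x2 + t.  A scan-2 plane (n2, d2) maps to (R n2, d2 + (R n2) . t).
    """
    R = Rotation.from_euler("xyz", params[:3], degrees=True).as_matrix()
    t = params[3:]
    res = []
    for (n1, d1), (n2, d2) in zip(planes_1, planes_2):
        n2_in_1 = R @ n2
        res.extend(n2_in_1 - n1)            # normal direction mismatch (3 values)
        res.append(d2 + n2_in_1 @ t - d1)   # plane offset mismatch (1 value)
    return np.asarray(res)


def register(planes_1, planes_2, rough_rotation_deg):
    """Rough rotation angles suffice; the translation can safely start at zero."""
    x0 = np.concatenate([rough_rotation_deg, np.zeros(3)])
    fit = least_squares(residuals, x0, args=(planes_1, planes_2))
    return fit.x[:3], fit.x[3:]             # estimated angles (deg), translation


if __name__ == "__main__":
    # Synthetic example: true rotation of 0, 0, 30 degrees about x, y, z.
    rng = np.random.default_rng(0)
    R_true = Rotation.from_euler("xyz", [0, 0, 30], degrees=True).as_matrix()
    t_true = np.array([-1.0, 0.5, 2.0])
    planes_1, planes_2 = [], []
    for _ in range(5):
        n2 = rng.normal(size=3)
        n2 /= np.linalg.norm(n2)
        d2 = rng.uniform(-5.0, 5.0)
        n1 = R_true @ n2
        planes_1.append((n1, d2 + n1 @ t_true))
        planes_2.append((n2, d2))
    # Initial rotation 0, 0, 60 degrees is within the pull-in range (cf. Table 1).
    angles, t = register(planes_1, planes_2, np.array([0.0, 0.0, 60.0]))
    print(np.round(angles, 2), np.round(t, 2))
```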
Table 2. Differences between the user-defined transformations and the transformations found by the algorithm
Test 1
                                      X          Y           Z
Rotation (user defined)               0.0°                   -30°
Rotation (algorithm)                  0.17°                  -29.5°
Difference                            0.17°                   0.5°
Translation (user defined) in mm                 -1000        0
Translation (algorithm) in mm                    -1024.2      14.3
Difference in mm                      13.9       -24.2        14.3

Test 2
Rotation (user defined)               77.7°      164°        -39.6°
Rotation (algorithm)                             6.4°        -39.7°
Difference                            0.1°       0.0°        -0.1°
Translation (user defined) in mm                 20000        30000
Translation (algorithm) in mm                    20049        29963
Difference in mm                                 49          -37
The differences between the transformation parameters found by the algorithm and those defined by the user (Table 2) stem from two causes. First, the laser points are noisy, so different parameters are found for an object when they are estimated from different data sets. Second, the measured surfaces were assumed to be planar. In reality this may not be the case, resulting in different solutions for the plane parameters when different parts of the object have been measured in different scans.
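The first effect can be illustrated with a small sketch (illustrative only, not part of the described method; numpy is assumed): fitting a plane to two different noisy subsets of the same surface yields slightly different plane parameters.

```python
import numpy as np


def fit_plane(points):
    """Total least-squares plane fit: unit normal n and offset d with n . x = d."""
    centroid = points.mean(axis=0)
    # The right singular vector of the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    if n[2] < 0:                      # fix the sign so the two fits are comparable
        n = -n
    return n, n @ centroid


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Noisy points on the plane z = 0 (5 mm noise, coordinates in mm).
    xy = rng.uniform(-1000.0, 1000.0, size=(2000, 2))
    z = rng.normal(scale=5.0, size=(2000, 1))
    points = np.hstack([xy, z])
    # Two scans typically cover different parts of the object: use two subsets.
    n_a, d_a = fit_plane(points[:1000])
    n_b, d_b = fit_plane(points[1000:])
    print("normal difference:", n_a - n_b, "offset difference:", d_a - d_b)
```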
To test the validity of the registration algorithm, tests were conducted using a perfect measurement strategy: objects were first measured in one scan, and the same objects were then measured in the second scan using exactly the same noisy points. With this strategy the algorithm recovered exactly the transformation parameters that were used to create the scans.
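A minimal sketch of such a check, assuming a total-least-squares plane fit (again illustrative, not the actual implementation): when the second scan contains exactly the same noisy points as the first, the fitted plane parameters are related by exactly the applied transformation, and the registration can therefore recover it exactly.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def fit_plane(points):
    """Total least-squares plane fit: unit normal n and offset d with n . x = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    if n[2] < 0:                      # fix the sign of the normal for comparison
        n = -n
    return n, n @ centroid


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Noisy points on an arbitrary plane, "measured" in scan 1.
    pts_1 = rng.uniform(-1.0, 1.0, size=(500, 3))
    pts_1[:, 2] = 0.1 * pts_1[:, 0] + rng.normal(scale=0.01, size=500)
    # Scan 2 consists of exactly the same noisy points in its own frame:
    # x1 = R x2 + t, hence x2 = R^T (x1 - t).
    R = Rotation.from_euler("xyz", [10, 0, 40], degrees=True).as_matrix()
    t = np.array([0.5, -2.0, 1.0])
    pts_2 = (pts_1 - t) @ R           # row-vector form of R^T (x1 - t)
    n1, d1 = fit_plane(pts_1)
    n2, d2 = fit_plane(pts_2)
    # The scan-2 plane mapped with the true transformation matches scan 1 exactly,
    # so the estimated transformation equals the one used to create the scans.
    print(np.allclose(R @ n2, n1), np.isclose(d2 + (R @ n2) @ t, d1))
```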
  
Figure 8. Scans 1 and 2 after registration
5. CONCLUDING REMARKS 
A method that integrates modelling and registration has been proposed. Using this method, the registration of images can be incorporated relatively easily. A drawback of the method is that objects that can easily be parameterised need to be present in the scene. The method will be used to model industrial sites, where there is no lack of well-defined shapes. For the registration, only very rough approximate values for the rotation parameters are needed; convergence of the system is independent of the initial values for the translations.
The method currently depends entirely on a human operator. Future research will focus on automation of the modelling and the registration.