
El-Hakim, Sabry 
3.2 Accuracy Verification 
In the tests shown in figures 7.A and 7.C, several distances were measured between 3D points, and the differences between the computed distances and the directly measured distances were evaluated. The radii computed from surface fits of the globe and of a circle on the rim in figure 7.B were also compared to directly measured values, as were the deviations from a plane in the walls shown in figure 7.D. Table 1 displays the results. Accuracy is estimated at between 1:2300 and 1:6000 of the site size, which is good considering that natural features were used for image registration and point measurement.
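The relative accuracies quoted here are simply the absolute measurement error expressed as a 1:N fraction of the site size. As a minimal sketch (the function name is mine, and the assumed extent of roughly 5.5 m for the "5 m view" is an assumption chosen to match the d1 row of Table 1):

```python
def relative_accuracy(difference_mm, site_size_mm):
    """Express an absolute measurement error as a 1:N fraction of site size."""
    return round(site_size_mm / abs(difference_mm))

# d1 row of Table 1: a 1.5 mm error over a site of roughly 5.5 m
print(f"1:{relative_accuracy(-1.5, 5550)}")  # prints 1:3700
```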
Description                                 Actual value   Difference       Relative to site size
Distances
  d1, figure 6-A (5 m view)                 1215           -1.5             1:3700
  d2, figure 6-A                            3810           -1.9             1:2900
  length of bench, figure 6-C (3 m view)    2001           -0.5             1:6000
  width of bench, figure 6-C                 300            1.3             1:2300
  between targets 13-14, figure 6-C          802            0.5             1:6000
Fitted surfaces
  sphere radius in figure 6-B (2 m view)     300.5          0.35            1:5700
  circle (rim) radius in figure 6-B          351            0.85            1:2300
  planes in figure 6-D (2.5 m view)            -            0.60 off-plane  1:4200

Table 1: Results of accuracy tests. The values and differences are in mm.
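The fitted-surface rows of Table 1 come from least-squares fits to the measured 3D points. The paper does not state which fitting algorithm was used; one common choice, shown here purely as a sketch, is the algebraic least-squares sphere fit, which linearizes the sphere equation as x² + y² + z² = 2ax + 2by + 2cz + d with d = r² − a² − b² − c²:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    Solves x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d for the center
    (a, b, c), then recovers the radius from d = r^2 - a^2 - b^2 - c^2.
    """
    pts = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    rhs = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius
```

The fitted radius can then be compared directly with the measured one, as in the globe row of Table 1 (300.5 mm, 0.35 mm difference).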
4 CONCLUSIONS AND FUTURE WORK 
The described system exhibits notable improvements in flexibility, accuracy, and completeness over existing approaches. The system is mostly interactive, with an easy-to-use interface; depending on the type of surface and the environment, certain components are automatic. The main advantage of the approach is its flexibility: it can use image-based modeling from multiple or single images, combine multiple image sets, use data from positioning devices, and integrate data from range sensors such as laser scanners. The accuracy achieved by applying a complete camera model and a simultaneous photogrammetric global bundle adjustment is sufficient for most applications.
Although this interactive system can be used to model a wide spectrum of objects and sites, it is still desirable to reduce 
human intervention, particularly when using a large number of images. Automation is particularly needed for: 
- image acquisition and view planning (incremental on-site modeling may be needed),
- point extraction and matching before registration, especially for widely spaced camera positions, and
- determining point connectivity by segmentation of 3D points into groups.
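As an illustration of the second point, a standard building block for automatic patch matching, shown here as a generic sketch rather than the paper's method, is the zero-mean normalized cross-correlation (ZNCC) score, which tolerates linear (gain and offset) brightness changes between views:

```python
import numpy as np

def zncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equal-size patches.

    Returns a score in [-1, 1] that is invariant to linear brightness
    changes, i.e. patch_b = g * patch_a + o with g > 0 scores 1.0.
    """
    a = np.ravel(patch_a).astype(float)
    b = np.ravel(patch_b).astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom > 0 else 0.0
```

ZNCC absorbs gain and offset changes between images but not occlusions or strong perspective distortion, which is part of why existing automatic methods still require closely spaced views.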
Occlusions and variations in illumination between images affect existing automatic methods for correspondence and image registration. They therefore require images taken at close intervals, which results in too many images as well as reduced geometric accuracy. In addition, the resulting 3D points are not likely to be suitable for modeling. Improved automated methods that do not suffer from these shortcomings are therefore the subject of future research.
ACKNOWLEDGMENTS 
I would like to thank my colleagues Angelo Beraldin and Luc Cournoyer for providing the range sensor data.
  
International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B5. Amsterdam 2000. 209 