Surprisingly, some manufacturers of laser scanners discard
some of the information generated by their scanners. All optical
three-dimensional (3D) scanners measure reflectance
information through the intensity of the returned laser
beam, yet in many cases the manufacturer removes that
important information from the raw 3D image file. In an
inspection application, El-Hakim et al. (1994) show that the use
of intensity data (reflectance) produced by a range camera can 
improve the accuracy of vision-based 3D measurements. The 
authors provide a survey (pre-1994) of multi-sensor data fusion 
methods in the context of computer vision. 
Wendt et al. (2002) present an approach for data fusion and
simultaneous adjustment of inhomogeneous data intended to 
increase the accuracy and reliability of surface reconstruction. 
They aimed at an approach to adjust any kind of data in a 
combined adjustment and to give adequate weights to each 
measurement. Their study is based on 3D data obtained from 
stripe (fringe) projection and photogrammetry-based systems. 
To validate their approach, they use two types of free-form 
object surfaces, one being artificial and known is used for test 
purposes and the other is a tile made of concrete. Johnson et al., 
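Although the following is not the authors' actual formulation, the core idea of giving adequate weights to inhomogeneous measurements can be illustrated by inverse-variance weighting, the simplest case of a combined least-squares adjustment. In this Python sketch, the observation values and standard deviations are purely hypothetical.

    import numpy as np

    # Two hypothetical observations of the same quantity (a depth in mm),
    # e.g., one from fringe projection and one from photogrammetry.
    obs = np.array([1000.4, 999.1])   # assumed measurements, mm
    sigma = np.array([0.2, 0.8])      # assumed a priori standard deviations, mm

    # Weights are inverse variances, so the more precise
    # measurement dominates the combined estimate.
    w = 1.0 / sigma**2
    x_hat = np.sum(w * obs) / np.sum(w)
    sigma_hat = np.sqrt(1.0 / np.sum(w))  # std. deviation of the combined estimate

    print(f"combined: {x_hat:.3f} mm +/- {sigma_hat:.3f} mm")

The weighting ensures that each sensor contributes in proportion to its precision, which is the essence of a combined adjustment of inhomogeneous data.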
Johnson et al. (2002) describe a technique for adaptive resolution surface
generation from multiple distributed sensors. They demonstrate 
the technique using 3D data generated by a scanning lidar and a 
structure from motion system. Other authors compare and 
discuss practicality issues of laser scanning and digital close 
range photogrammetry (Velios et al., 2002; CIPA & ISPRS,
2002). Increasingly, laser scanning and photogrammetry are 
combined for many applications. These applications include 
documentation of as-built sites like offshore oil and gas 
structures, process plants, nuclear and power generation 
stations, architectural and construction sites, industrial 
manufacturing facilities, automotive production, space 
exploration and cultural heritage. 
In this paper, resolution, uncertainty and accuracy of 3D
information measurement in the context of close-range 3D
systems are discussed. Laser scanners are reviewed in more
detail than photogrammetry. A number of examples
illustrating the importance of sensor characterization are shown. 
Some comments about the impact of the user on a project are also
presented. The goal is not to survey all commercial 3D vision
systems or present an exhaustive list of tests of the systems
chosen for this paper. Instead, some basic theory about 3D 
sensing is presented and is accompanied by selected results that 
should give the reader some pointers in order to become more 
critical when picking 3D vision systems and a sensor fusion 
strategy. 
2. OPTICAL SENSORS FOR THREE-DIMENSIONAL MEASUREMENTS
In the last twenty years, many advances have been made in the 
field of solid-state electronics, photonics, computer vision and 
computer graphics. Non-contact three-dimensional (3D)
measurement techniques like those based on structured light 
and passive stereo are examples of fields that have benefited 
from all of these developments. In the case of passive 
techniques (that use ambient light), only visible features with 
discernible texture gradients, such as intensity edges, are
measured. Active systems, and in particular laser-based systems,
are used to structure the environment in order to acquire dense 
range maps from visible surfaces that are rather featureless to 
the naked eye or a video camera. In order to take full advantage 
of these vision systems, one must understand not only their 
advantages but also their limitations. Baltsavias (1999)
compares photogrammetry and airborne laser scanning. This 
section reviews the basic principles and best practices that 
underlie laser scanners and digital photogrammetry for 3D
vision systems in the case of close-range applications. We 
emphasize laser scanning, as one specific scanner cannot be used
for volumes of different sizes.
2.1 Laser scanners 
Active sensors that use light waves for 3D measurements can be 
divided into classes according to different characteristics. A 
number of taxonomies exist in the literature (Nitzan, 1988; 
Jähne et al., 1999). Here we summarize the main classes and
give the practical camera-to-object operating distances (a numeric
sketch of the time-delay range relations follows the list):
Triangulation: scanner-to-object distance about 0.1 cm to 500 cm
  - Single spot (1D)
  - Profile measurement (2D)
  - Area measurement (3D, really 2.5D)
      o Galvanometer-based laser scanning
      o Laser probe combined with translation-rotation motors,
        articulated arms and coordinate measuring machines (CMM),
        position trackers
      o Multi-point and line projection based on diffraction gratings
      o Fringe and coded pattern projection
      o Moiré effect
Time delay & light coherence
  - Time of flight: 100 cm to several km
      o Single point and mirror-based scanning
          - Pulsed lasers
          - AM or FM modulation
      o Full field using micro-channel plates or custom-built
        silicon chips (pulsed or AM)
  - Interferometric and holographic: wide distance range
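As a rough numeric illustration of the time-delay classes above: pulsed time-of-flight systems recover range from the round-trip delay of the laser pulse, R = c·Δt/2, while AM (phase-based) systems recover it from the phase shift of the modulated signal, R = c·Δφ/(4π·f_mod), within an ambiguity interval of c/(2·f_mod). The Python sketch below is a minimal illustration of these textbook relations; all numeric values are assumed for the example only.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def range_from_delay(dt_s):
        # Pulsed time of flight: range = c * delay / 2 (round trip).
        return C * dt_s / 2.0

    def range_from_phase(phase_rad, f_mod_hz):
        # AM modulation: range from phase shift, unambiguous only
        # within one interval of c / (2 * f_mod).
        return C * phase_rad / (4.0 * math.pi * f_mod_hz)

    # Illustrative values: a 66.7 ns round trip corresponds to about 10 m;
    # a 10 MHz modulation gives a 15 m ambiguity interval.
    print(range_from_delay(66.7e-9))        # ~10.0 m
    print(range_from_phase(math.pi, 10e6))  # 7.5 m, half the ambiguity interval
    print(C / (2 * 10e6))                   # 15.0 m ambiguity interval

For the assumed 10 MHz modulation, ranges are recovered only modulo 15 m; practical AM systems resolve this ambiguity, for instance by combining several modulation frequencies.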
  
Figure 1. Laser-based optical triangulation (single spot). 
2.1.1 Triangulation 
Triangles are the basis of many measurement techniques, from 
basic geodetic measurements performed in ancient Greece to
16th century theodolite-based surveys and now modern laser-
based (or projector-based) 3D cameras. The basic geometrical 
principle of optical triangulation is shown in Figure 1. To 
acquire a full 3D image, one of the scanning techniques listed 
above can be used. The collection of the scattered laser light 
from the surface is done from a vantage point distinct from the 
projected light beam. This light is focused onto a linear position 
sensitive detector (herein called the laser spot sensor). Knowing
the baseline between the laser source and the spot sensor, and the
two angles this baseline forms with the projected and collected
beams, fully determines the triangle and hence the 3D position of
the illuminated point.
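Under the simple geometry of Figure 1, with focal length f, baseline b between the laser source and the optical center, beam deflection angle θ from the optical axis, and measured spot position p on the linear detector, similar triangles give z = f·b / (p + f·tan θ). The Python sketch below evaluates this relation; the parameter values are illustrative and not tied to any particular scanner.

    import math

    def triangulate_z(p_mm, theta_rad, f_mm, b_mm):
        # Single-spot laser triangulation. From similar triangles,
        # p = f * (b - z * tan(theta)) / z, which solves to
        # z = f * b / (p + f * tan(theta)).
        return f_mm * b_mm / (p_mm + f_mm * math.tan(theta_rad))

    # Illustrative configuration: 10 mm lens, 100 mm baseline,
    # laser beam deflected 15 degrees from the optical axis.
    f, b, theta = 10.0, 100.0, math.radians(15.0)
    for p in (0.5, 1.0, 2.0):  # spot positions on the detector, mm
        print(f"p = {p} mm -> z = {triangulate_z(p, theta, f, b):.1f} mm")

Equal increments of p map to unequal increments of z; this nonlinearity is why the range uncertainty of a triangulation scanner grows roughly with the square of the stand-off distance.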