AUTOMATIC FUSION OF PHOTOGRAMMETRIC IMAGERY AND LASER SCANNER 
POINT CLOUDS 
Eric K Forkuo and Bruce King 
Department of Land Surveying & Geo-Informatics 
The Hong Kong Polytechnic University 
Hung Hom, Hong Kong 
eric.forkuo@polyu.edu.hk, bruce.king@polyu.edu.hk
KEY WORDS: Laser scanning, Photogrammetry, Fusion, Matching, Registration, Multisensor, Terrestrial 
ABSTRACT 
Fusion of close range photogrammetry and the relatively new technology of terrestrial laser scanning offers new opportunities for photorealistic 3D model presentation, classification of real world objects, and virtual reality creation (fly-through). Laser scanning technology can be seen as a complement to close-range photogrammetry. For instance, terrestrial laser scanners (TLS) have the ability to rapidly collect high-resolution 3D surface information of an object. The same type of data could be generated using close range photogrammetric (CRP) techniques, but image disparities common to close range scenes make this an operator-intensive task. The imaging systems of some TLSs do not have very high radiometric resolution, whereas the high-resolution digital cameras used in modern CRP do. Finally, TLSs are essentially Earth-bound, whereas cameras can be moved at will around the object being imaged. This paper presents the results of an initial study into the fusion of terrestrial laser scanner generated 3D data and high-resolution digital images. Three approaches to their fusion have been investigated: data fusion, which integrates data from the sensors to create synthetic perspective imagery; image fusion (of the synthetic perspective imagery and the intensity images); and model-based image fusion (of the 2D intensity image and the 3D geometric model). Image registration, which includes feature detection and feature correspondence matching, is performed prior to fusion to determine the rotation and translation of the digital camera relative to the laser scanner. To overcome the differences between the datasets, a feature- and area-based matching algorithm was successfully developed and implemented. Some results of interest point measurement and correspondence matching are presented. The initial study shows that the model-based approach offers the most promise.

1. INTRODUCTION

Recently, close range photogrammetry (CRP) and the relatively new technology of terrestrial 3D laser scanning (TLS) have been used to automatically, accurately, reliably, and completely measure or map, in three dimensions, objects, sites, or scenes. A terrestrial 3D laser scanner has the ability to rapidly collect high-resolution 3D surface information of an object or scene. The available scanning systems extend to all object types, almost regardless of scale and complexity (Barber et al., 2001). The same type of data could be generated using close
range photogrammetric (CRP) techniques, but image disparities 
common to close range scenes make this an operator-intensive
task. The imaging systems of some TLSs do not have very high 
radiometric resolution whereas high-resolution digital cameras 
used in modern CRP do. Also, TLSs are essentially Earth- 
bound whereas cameras can be moved at will around the object 
being imaged. It is intuitive then to consider the fusion of data 
from the two sensors to represent the objects and scenes, and to 
create models that are more complete, and thus easier to 
interpret, than a model created from the 3D point cloud data 
alone (Elstrom et al., 1998). This fusion, which is not
application specific, can be useful in: texture-mapping the 
point cloud to create photo-realistic 3D models which are 
essential for a variety of applications (such as 3D city models,
virtual tourist information as well as visualization purposes); 
extraction of reference targets for registration and calibration 
purposes (El-Hakim and Beraldin, 1994); automation of 3D
measurement (automatic exterior orientation); 3D
reconstruction; and, if the data is geo-referenced, it can be
readily incorporated into existing GIS applications.

This paper focuses on three distinct approaches to the multisensor fusion task. The first is data fusion, which integrates data from the two sensors (3D point cloud data and 2D intensity image) to create a synthetic perspective image, as sketched below. The advantage is that existing traditional image processing algorithms can operate on this generated synthetic image. Also, registering this image to the intensity image is a much easier task than registering the 2D image to the 3D point clouds directly.
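
As an illustration of this first approach, the following minimal sketch (in Python with NumPy) projects laser scanner points through an ideal pinhole camera to form a synthetic perspective intensity image; the function name, the calibration matrix K, and the assumption that the points are already expressed in the camera frame are ours, not this study's.

    import numpy as np

    def synthetic_perspective_image(points, intensities, K, image_size):
        # points:      (N, 3) laser points, assumed already in the camera frame
        # intensities: (N,) laser return intensities
        # K:           (3, 3) camera calibration matrix
        # image_size:  (height, width) of the synthetic image
        h, w = image_size
        image = np.zeros((h, w))
        depth = np.full((h, w), np.inf)      # z-buffer keeps the nearest point

        in_front = points[:, 2] > 0          # drop points behind the camera
        pts, vals = points[in_front], intensities[in_front]

        proj = (K @ pts.T).T                 # pinhole projection x = K X
        u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
        v = np.round(proj[:, 1] / proj[:, 2]).astype(int)

        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        for ui, vi, z, val in zip(u[inside], v[inside], pts[inside, 2], vals[inside]):
            if z < depth[vi, ui]:            # nearest surface wins
                depth[vi, ui] = z
                image[vi, ui] = val
        return image, depth
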
The second approach, on the other hand, is image fusion, which involves feature detection and feature correspondence matching between the generated synthetic image and the intensity image acquired with a digital camera; a sketch of the matching step follows.
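
To make the correspondence step concrete, the sketch below performs area-based matching with normalized cross-correlation between windows centred on candidate interest points; the window size, acceptance threshold, and function names are illustrative assumptions, not the algorithm actually implemented in this study.

    import numpy as np

    def ncc(a, b):
        # normalized cross-correlation of two equally sized windows
        a, b = a - a.mean(), b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def window(img, x, y, r):
        return img[y - r:y + r + 1, x - r:x + r + 1]

    def in_border(img, x, y, r):
        return r <= x < img.shape[1] - r and r <= y < img.shape[0] - r

    def match_points(img1, img2, pts1, pts2, win=7, threshold=0.8):
        # for each interest point in img1, keep the point in img2 whose
        # surrounding window correlates best (area-based matching)
        r = win // 2
        matches = []
        for x1, y1 in pts1:
            if not in_border(img1, x1, y1, r):
                continue
            best, best_score = None, threshold
            for x2, y2 in pts2:
                if not in_border(img2, x2, y2, r):
                    continue
                score = ncc(window(img1, x1, y1, r), window(img2, x2, y2, r))
                if score > best_score:
                    best, best_score = (x2, y2), score
            if best is not None:
                matches.append(((x1, y1), best, best_score))
        return matches
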
The third approach, model-based image fusion, relates each pixel in the 2D intensity image to its corresponding sampled 3D point on the object surface. The task is to determine the relationship between the coordinate systems of the image and the object. The result of this procedure is that the intensity image and the geometric model are positioned and oriented in the same coordinate system.
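
Recovering this relationship amounts to estimating the rotation and translation of the camera with respect to the object (scanner) frame from matched 2D-3D correspondences. A minimal sketch using OpenCV's solvePnP is given below; the calibration matrix, the simulated pose, and the point values are synthetic stand-ins rather than data from this study.

    import numpy as np
    import cv2

    K = np.array([[800.0, 0.0, 320.0],       # assumed camera calibration matrix
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # six 3D laser points in the scanner frame (synthetic stand-ins)
    object_pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.2, 5.5], [0.1, 1.0, 6.2],
                           [1.2, 1.1, 4.8], [0.5, 0.4, 5.9], [1.6, 0.7, 6.6]])

    # simulate the camera: project the points under a known pose
    rvec_true = np.array([0.05, -0.10, 0.02])
    tvec_true = np.array([0.3, -0.2, 1.0])
    image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)
    image_pts = image_pts.reshape(-1, 2)

    # recover the camera pose from the 2D-3D correspondences
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)                # rotation matrix object -> camera
    print("recovered rotation:\n", R)
    print("recovered translation:", tvec.ravel())
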
Fusing data taken from two different sensors requires that the multisensor data be correctly registered or relatively aligned, and this paper therefore describes an approach to fuse high-resolution perspective 2D imagery and high-resolution 3D point cloud data. Our setup uses 3D point cloud data from a 3D laser scanner and a 2D intensity image from an independent CCD camera. These instruments provide independent datasets (geometry and intensity) and beg the question of how we can accurately express these complementary datasets in a single object-centred coordinate system; a sketch of this final mapping closes this section. Also, matching features between an intensity image and the geometry automatically in such a multi-sensor environment is not a trivial task (Pulli and Shapiro, 2000). It can be close to impossible due to the fact that the datasets are independent and dissimilar (Boughorbal et al., 2002), differing in resolution, field of view, and scale.

In section 2 of this paper, the multisensor data fusion methodology and integration models are discussed. Section 3 deals with the multisensor image matching procedure. Section 4 describes the model-based image fusion. The results are then presented and discussed.
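
Once the pose (R, t) has been recovered, expressing the two datasets in one coordinate system reduces to transforming each laser point into the camera frame, projecting it into the image, and sampling the grey value it falls on. The sketch below illustrates this mapping; the function name and the ideal distortion-free pinhole camera are our assumptions.

    import numpy as np

    def colourise_point_cloud(points, image, K, R, t):
        # points: (N, 3) laser points in the scanner coordinate system
        # image:  (H, W) intensity image; K: (3, 3) calibration matrix
        # R, t:   rotation and translation taking scanner frame to camera frame
        h, w = image.shape
        cam = (R @ points.T).T + t            # transform into the camera frame
        proj = (K @ cam.T).T
        safe_z = np.where(cam[:, 2] > 0, proj[:, 2], np.inf)  # avoid bad depths
        u = np.round(proj[:, 0] / safe_z).astype(int)
        v = np.round(proj[:, 1] / safe_z).astype(int)

        grey = np.full(len(points), np.nan)   # NaN where a point is not imaged
        visible = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        grey[visible] = image[v[visible], u[visible]]
        return grey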
Thank you.