2. DATA FUSION AND ITS APPLICATION IN PHOTOGRAMMETRY
In photogrammetry, data fusion is an image-processing technique for combining two or more images of an object, acquired at different times or from different locations, in order to achieve a more accurate output. Data fusion is generally carried out for change detection, updating of existing maps, or ortho-image production. There are generally three groups of data fusion: image-to-map, image-to-image, and image-to-database registration. Morgado and Dowman (1997) carried out image-to-map registration for automatic absolute orientation, Derenyi (1996) implemented map-to-image and image-to-map registration to investigate change detection, and Bouziani et al. (2010) carried out image-to-map registration for change detection. Khoshelham et al. (2010) utilised image-to-image registration for change detection, and Suveg and Vosselman (2004) combined aerial photographs with a GIS database to extract buildings from aerial images.
Integrating an image with laser scanning data is another photogrammetric technique for producing ortho-images or ortho-rectified images. Laser scanners provide fairly accurate topographic data together with intensity values from object surfaces, and these data can be used for DTM generation and 3D modelling. Various approaches and techniques exist for integrating images and laser scanning data. For example, Iwashita et al. (2007) integrated a grey-scale image with a 3D model obtained from a laser scanner using a fast marching algorithm, Zhao and Popescu (2009) assessed leaf area index by integrating a Quickbird image with lidar data, and Mizowaki et al. (2002) registered a CT image on an MRS image to develop a treatment method for prostate cancer. Clearly, there are numerous studies on registering an image on scanner data in order to enhance the image, extract objects, or produce maps precisely and accurately.
In conventional image registration, regardless of which mathematical model is utilised, an image, which is a two-dimensional plane, is ultimately transferred to another two-dimensional plane. The main aspiration of image registration is to convert an image to a map, or in other words to convert a perspective projection into an orthographic projection. Since the emergence of the digital image in photogrammetry, numerous approaches to image transformation have been developed and the demand for ortho-images has risen significantly. The benefit of digital images over conventional film-based photographs is their flexibility: digital images can easily be stretched, squeezed, rotated, and edited radiometrically and geometrically. However, it must always be remembered that the output of such image processing is still a two-dimensional image, and there is no defined approach for omitting or reducing the distortion in the final output. The existing approaches transfer the whole image according to a mathematical model whose components are a rotation matrix (R), a translation vector (V), and a scale factor (s). At least four control points are required to obtain the elements of R, V and s when R is a 3x3 matrix and V has three components.
The proposal for this project is based on developing a novel approach for registering aerial images on laser scanning data, without prior knowledge of the interior orientation parameters, that provides a robust and reliable output free from distortion. The approach has then been extended to registering terrestrial images on a 3D model, and it can easily be expanded to register any image to any data, such as a DTM, DSM, digital topographic data, or GIS data. According to the proposal, an image is initially divided into sub-areas according to the geometry of the object and the topography of the terrain. Each sub-area is then transferred to the host or source data pixel by pixel. During the transformation, pixels are converted into points whose elements include geometric coordinates and intensity values.
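As an illustrative sketch only, and not the authors' actual implementation, the following Python fragment shows the general idea of converting the pixels of one sub-area into points carrying both geometric coordinates and intensity values; the transform argument is a placeholder for whatever per-sub-area mapping the approach estimates, and the function name subarea_to_points is hypothetical.

import numpy as np

def subarea_to_points(image, rows, cols, transform):
    # image     : 2D array of intensity values
    # rows, cols: index ranges defining one sub-area
    # transform : callable mapping a pixel (row, col) to object-space (X, Y, Z);
    #             stands in for the per-sub-area mapping of the approach
    points = []
    for r in rows:
        for c in cols:
            X, Y, Z = transform(r, c)
            points.append((X, Y, Z, image[r, c]))  # coordinates plus intensity
    return np.array(points)

Each sub-area thus becomes a small point set in the same space as the laser scanning data, which is what allows the pixel-by-pixel transfer described above.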
Usually a correlation method is implemented for matching between two or more images of an object acquired by the same type of sensor. Since the camera sensor and the laser scanning sensor are different, a new approach has been developed for matching between the image and the laser scanning data or 3D model. Laser scanners acquire data from object surfaces and provide them in point-cloud format, whereas a digital image is a two-dimensional raster of pixels containing intensity values.
The existing correlation matching for sequences of images has been developed on the basis of comparing the gradients of the intensity values of a pixel with those of its neighbouring pixels, and is recognised in the forms of template matching, cross-correlation, and convolution. In contrast, scan matching approaches for laser scanning data have been developed on the basis of the locations of points in 2D or 3D space, and include point-to-point matching, e.g. the Iterative Closest Point (ICP) algorithm, feature-based matching, and point-to-feature matching.
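As a minimal sketch of the image-side technique mentioned above, assuming NumPy and the hypothetical function names below, a basic normalised cross-correlation template match can be written as follows; point-based scan matching methods such as ICP follow a different logic and are not sketched here.

import numpy as np

def normalised_cross_correlation(template, window):
    # Normalised cross-correlation between a template and an equally sized
    # image window; values close to 1 indicate a strong match.
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def best_match(image, template):
    # Slide the template over the image and return the (row, col) offset
    # with the highest correlation score, i.e. a basic template-matching search.
    th, tw = template.shape
    best_score, best_pos = -1.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = normalised_cross_correlation(template, image[r:r + th, c:c + tw])
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score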
In this study, the matching between an image and laser scanner data has been proposed as follows: