International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B4. Istanbul 2004 
  
3. MULTISENSOR IMAGE FUSION 
In many image processing applications it is necessary to
compare multiple images of the same scene acquired by
different sensors, or images taken by the same sensor at
different times or from different locations. This section
describes the process of matching multi-source data of the
same scene acquired from different viewpoints and by
different sensors. The purpose of multi-sensor image matching in this
paper is to establish the correspondence between the RCI and the SCI,
and to determine the geometric transformation that aligns one
image with the other. Existing multi-sensor image matching
techniques fall into two broad categories: manual and
automatic image matching. The implementation and results of
manual multi-sensor image matching, which includes interior,
relative and absolute orientations using two different types of
software for comparison purposes, have been discussed in
Forkuo and King (2003). The manual measurement was
necessary to understand key issues such as the geometric
quality and the spatial and geometric resolutions of the generated
synthetic camera image.
3.1 Automatic Multisensor Image Matching
Once the 2D intensity image has been generated from the 3D
point cloud, the locations of corresponding features in the
Synthetic Camera Image (SCI) and the Real Camera Image
(RCI) are determined. The most difficult part of the automatic
registration is essentially the correspondence matching: given
a point in one image, find the corresponding point in each of
the other image(s). Although automatic correspondence is
not a problem for vertically oriented images, it is still a
problem in the terrestrial case, and it is even more complex in
the terrestrial multi-sensor case. It can be observed that, since both
image types are formed using similar mechanisms, the locations
of many objects are identifiable in each image. However, there
are differences in illumination, perspective and reflectance, as well
as a lack of appropriate texture (Milian et al., 2002), between
these images. Also, images from different sensors usually have
their own inherent noise (Habib and Alruzoug, 2004).
Furthermore, the automatic registration problem can be
complicated, in our case, by differences in image resolution
and scale, and by low image quality (especially of the SCI).
One approach that overcomes the correspondence
problem automatically combines area-based and feature-based methods
(Dias et al., 2002). The first step of correspondence matching,
or simply pairwise matching, is the extraction of features,
generally interest points, from both images using the Harris corner
detector. Initial correspondences between these points are then
established by correlating the regions around the features. The
similarity is then judged by the correlation of the regions around
corresponding interest points in the two images (Rothfeder et
al., 2003). We have discussed the matching algorithm, which
consists of a feature extraction process followed by cross
correlation matching, in Forkuo and Bruce (2004).
3.1.1 Automatic Feature Detection and Extraction 
The automatic registration problem requires finding features
(edges, corners) in one image and correlating them with those in another.
For this paper, the Harris corner detector, as proposed in Harris and
Stephens (1988), is used to detect and extract corners in both
images. This operator has been widely used, and it has been
shown to be robust to viewpoint changes (i.e. image rotations
and translations) and illumination changes (Dufournaud et al.,
2004; Rothfeder et al., 2003). However, the Harris point
detector is not invariant to changes in scale (Dufournaud et al.,
2004). It uses a threshold on the number of corners extracted
based on the image size. The number of corners detected in the
images is variable (Rothfeder et al., 2003); in figure 4, the
two images are shown with the detected corner features. Once
feature points are extracted from the image pair, correspondence
matching can be performed.
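As an illustration, the Harris response can be sketched in a few lines of Python/NumPy. This is a simplified version, not the paper's implementation: a box filter stands in for the usual Gaussian weighting, and the parameter values (k, window size, number of corners) are illustrative assumptions.

```python
import numpy as np

def box_sum(a, win):
    """Sum of a over each win x win neighborhood, via an integral image."""
    pad = win // 2
    a = np.pad(a, pad, mode="edge")
    c = np.cumsum(np.cumsum(a, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column for the subtraction below
    return (c[win:, win:] - c[:-win, win:]
            - c[win:, :-win] + c[:-win, :-win])

def harris_corners(img, k=0.04, win=3, n_best=20):
    """Return (row, col) positions of the n_best strongest Harris
    responses R = det(M) - k * trace(M)^2 (Harris and Stephens, 1988)."""
    Iy, Ix = np.gradient(img.astype(float))   # image gradients
    Sxx = box_sum(Ix * Ix, win)               # windowed entries of the
    Syy = box_sum(Iy * Iy, win)               # autocorrelation matrix M
    Sxy = box_sum(Ix * Iy, win)
    R = Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
    idx = np.argsort(R.ravel())[::-1][:n_best]
    return np.column_stack(np.unravel_index(idx, R.shape))
```

On a synthetic image of a bright square, the strongest responses cluster at the four square corners; a full implementation would add non-maximum suppression and the image-size-dependent threshold mentioned above.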
3.1.2 Correspondence matching 
This section concentrates on determining correspondences
between the two sets of extracted interest points that were detected
with the Harris corner operator. To match these features
automatically, the zero mean normalized cross correlation
(ZNCC) measure, which is invariant to varying lighting
conditions (Lhuillier and Quan, 2000), is used. This method
uses a small window around each point to be matched (the point
becomes the center of a small window of gray level intensities),
and this window (template) is compared with similarly sized
regions (neighborhoods) in the other image (Rothfeder et al.,
2003). In other words, the ZNCC method is based on the
analysis of the gray level pattern around the detected point of
interest and on the search for the most similar pattern in the
successive image (Giachetti, 2000). Each comparison yields a
score, a measure of similarity. The match is assigned to the
corner with the highest matching score (Smith et al., 1998).
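A compact sketch of the ZNCC score and the winner-take-all assignment described above, in Python/NumPy; the window half-size and acceptance threshold are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def zncc(a, b):
    """Zero mean normalized cross correlation of two equal-size patches.
    Removing each patch's mean and normalizing by its energy makes the
    score invariant to gain/offset lighting changes; scores lie in [-1, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_corners(img_a, pts_a, img_b, pts_b, half=7, thresh=0.8):
    """For each corner in image A, correlate its (2*half+1)^2 template
    against the windows of all corners in image B; keep the best score
    if it exceeds thresh (winner-take-all)."""
    def window(img, p):
        r, c = p
        if (half <= r < img.shape[0] - half and
                half <= c < img.shape[1] - half):
            return img[r - half:r + half + 1, c - half:c + half + 1]
        return None  # corner too close to the image border

    matches = []
    for pa in pts_a:
        wa = window(img_a, pa)
        if wa is None:
            continue
        scores = [(zncc(wa, wb), pb) for pb in pts_b
                  if (wb := window(img_b, pb)) is not None]
        if scores:
            best, pb = max(scores, key=lambda s: s[0])
            if best >= thresh:
                matches.append((tuple(pa), tuple(pb), best))
    return matches
```

Because only the best-scoring candidate is kept, very similar neighborhoods can still produce the mismatches (outliers) discussed next.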
Selecting a suitable patch size (correlation window) and
threshold for the matching process reduces the number of
false correspondence pairs detected. However, in our case,
the number of mismatches (referred to as outliers) may be quite
large (as can be observed in figure 5). This occurs in particular
when some corners cannot be matched. Also, there are likely to
be several candidate matches for some corners which are very
similar (Smith et al., 1998). These correspondences are refined
using a robust search procedure such as the RANdom SAmple
Consensus (RANSAC) algorithm (Capel and Zisserman, 2003;
Fischler and Bolles, 1981). This algorithm allows the
user to define in advance the number of potential outliers
through the selection of a threshold. The best solution is the one
that maximizes the number of points whose residuals are
below a given threshold. Details can be found in Fischler and
Bolles (1981). Once outliers are removed, the set of
points identified as inliers may be combined to give the final
solution (RANSAC inliers); the result is shown in figure 6.
These inlying correspondences are used in the model-based
image fusion.
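The RANSAC refinement can be illustrated with a short Python/NumPy sketch. Here a 2D affine model stands in for whatever transformation is estimated between the two images, and the iteration count and inlier tolerance are illustrative assumptions: minimal samples are drawn repeatedly, and the model whose residuals leave the most points below the threshold wins (Fischler and Bolles, 1981).

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit: dst ~ P[:, :2] @ src + P[:, 2]."""
    n = len(src)
    M = np.zeros((2 * n, 6)); b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        M[2 * i]     = [x, y, 1, 0, 0, 0]; b[2 * i] = u
        M[2 * i + 1] = [0, 0, 0, x, y, 1]; b[2 * i + 1] = v
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p.reshape(2, 3)

def ransac_affine(src, dst, n_iter=200, tol=2.0, seed=0):
    """Fit an affine model robustly: sample minimal 3-point subsets,
    score each hypothesis by its inlier count, keep the best, then
    refit on all inliers for the final solution."""
    rng = np.random.default_rng(seed)
    src = np.asarray(src, float); dst = np.asarray(dst, float)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        P = fit_affine(src[idx], dst[idx])
        pred = src @ P[:, :2].T + P[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

The residual threshold `tol` plays the role described above: it fixes in advance how far a correspondence may deviate from the model before it is declared an outlier.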
4. MODEL-BASED IMAGE FUSION 
In this context, model-based fusion is the process of
establishing a link between each pixel in the 2D intensity
image and its corresponding sampled 3D point on the object
surface. The task is to determine the relationship between the coordinate
systems of the image and the object by the photogrammetric
process of exterior orientation. The exterior orientation process
is achieved in two steps. In the first step, we relate each
matched pixel of the extracted features in the SCI to its
corresponding 3D point from the point cloud data using
interpolation constants. That is, the automatic link between the
object coordinate system and the image coordinate system has
been established. This means that the image coordinate, object