
  
ISPRS Commission III, Vol. 34, Part 3A "Photogrammetric Computer Vision", Graz, 2002 
  
AUTOMATED IMAGE REGISTRATION USING GEOMETRICALLY INVARIANT 
PARAMETER SPACE CLUSTERING (GIPSC) 
Gamal Seedahmed and Lou Martucci 
Remote Sensing and Electro-Optics Group 
Engineering Physics Division 
Pacific Northwest National Laboratory* 
902 Battelle Blvd. Richland, WA, 99352 USA 
Gamal.Seedahmed@pnl.gov 
  
Lou.Martucci@pnl.gov 
  
Commission III, WG III/I 
KEY WORDS: Automation, Image Registration, Hough Transform, Geometric Invariance, Clustering 
ABSTRACT: 
Accurate, robust, and automatic image registration is a critical task in many applications that employ multi-sensor and/or 
multi-date imagery. In this paper we present a new approach to automatic image registration, which obviates the need 
for feature matching and solves for the registration parameters in a Hough-like approach. The basic idea underpinning GIPSC 
methodology is to pair each data element belonging to two overlapping images, with all other data in each image, through a 
mathematical transformation. The results of pairing are encoded and exploited in histogram-like arrays as clusters of votes. 
Geometrically invariant features are adopted in this approach to reduce the computational complexity generated by the high 
dimensionality of the mathematical transformation. In this way, the problem of image registration is characterized not by spatial or 
radiometric properties, but by the mathematical transformation that describes the geometric relationship between two or more 
images. While this approach does not require feature matching, it does permit recovery of matched features (e.g., points) as a useful 
by-product. The developed methodology incorporates uncertainty modeling using a least squares solution. Successful and promising 
experimental results of multi-date automatic image registration are reported in this paper. 
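To make the pairing-and-voting idea concrete, the following minimal sketch (our illustration, not the authors' implementation) applies it to the simplest case of a pure 2D translation between two point sets: every point in one image is paired with every point in the other, each pairing casts a vote for the translation it implies, and the densest cluster of votes is taken as the registration result.

```python
import numpy as np

def vote_translation(pts_a, pts_b, bin_size=1.0):
    """Accumulate (dx, dy) votes for all cross-image point pairings.

    No feature matching is performed: correct correspondences pile
    their votes into one accumulator cell, while wrong pairings
    scatter as a diffuse background.
    """
    accumulator = {}
    for xa, ya in pts_a:
        for xb, yb in pts_b:
            # Translation implied by this (possibly wrong) pairing,
            # quantized to an accumulator cell.
            cell = (round((xb - xa) / bin_size), round((yb - ya) / bin_size))
            accumulator[cell] = accumulator.get(cell, 0) + 1
    # Peak of the vote clusters = estimated registration parameters.
    (dx, dy), votes = max(accumulator.items(), key=lambda kv: kv[1])
    return dx * bin_size, dy * bin_size, votes

# Synthetic check: image B is image A shifted by (5, -3), plus clutter.
rng = np.random.default_rng(0)
pts_a = rng.uniform(0, 100, size=(30, 2))
pts_b = np.vstack([pts_a + np.array([5.0, -3.0]),       # true matches
                   rng.uniform(0, 100, size=(10, 2))])  # clutter points
dx, dy, votes = vote_translation(pts_a, pts_b)
```

The paper's full method works with a higher-dimensional transformation and uses geometric invariants to keep the accumulator tractable; the pure-translation case above only illustrates the voting principle.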
1. INTRODUCTION 
The goal of image registration is to geometrically align two 
or more images so that respective pixels or their derivatives 
(edges, corner points, etc.) representing the same underlying 
structure (object space) may be integrated or fused. In some 
applications image registration is the final goal (interactive 
remote sensing, medical imaging, etc.), while in others it is a 
required link to accomplish higher-level tasks (multi-sensor 
fusion, surface reconstruction, etc.). In a multi-sensor context, 
registration is a critical starting point for combining multiple 
attributes and evidence from multiple sensors. In turn, multi- 
sensor registration or fusion can be used to assess the 
meaning of the entire scene at the highest level of abstraction 
and/or to characterize individual items, events (e.g. motion), 
and other types of data. 
The sequential steps of feature extraction, feature matching, 
and geometric transformation have evolved into a general 
paradigm for automatic image registration (see Brown, 
1992). Many algorithms have been developed around this 
paradigm to handle automatic image registration, with a 
major focus on solving the matching (correspondence) 
problem. The basic idea behind most of these algorithms is 
to match image features according to their radiometric or 
geometric properties, using a pre-specified cost function to 
assess the quality of the match (see Dare and Dowman, 
2001; Thepaut et al., 2000; Hsieh et al., 1997; Li et al., 1995; 
Wolfson, 1990). While these methods have the advantages 
of computing the transformation parameters in a single step 
and of retaining the traditional view of registration, in which 
similar features are identified first and the parameters of the 
geometric transformation are then computed, they have 
considerable drawbacks in meeting the current challenges of 
image registration. First of all, they 
require feature matching, which is difficult to achieve in a 
  
* Operated by Battelle for the US Dept. of Energy (DOE) 
multi-sensor context since the common information, which is 
the basis of registration, may manifest itself in a very 
different way in each image. This is because different sensors 
record different phenomena in the object scene; consider, for 
instance, a radar image versus an optical image. Second, 
feature extraction algorithms often fail to detect complete 
image features; edge gaps and occlusion are two well-known 
examples of missing information that can lead to incorrect 
matching. 
In the late nineties and through 2001, Hough Transform 
(HT)-like approaches emerged as a powerful class of 
registration methods for image and non-image data. This new 
class of methods provides a remedy to the above-mentioned 
problems and considers different strategies to reduce the 
computational complexity that hampered the wide use of the 
original HT (Hough, 1962). In comparison to the previous 
approaches, this new class is a correspondence-less strategy 
since it does not use feature correspondence to recover the 
transformation. Instead, a search is conducted in the space of 
possible transformations. The Modified Iterative Hough 
Transform (MIHT) is a representative method that belongs to 
Hough-like approaches. MIHT has been developed to solve 
automatically for different tasks such as single photo 
resection, relative orientation, and surface matching (see 
Habib and Schenk, 1999; Habib et al., 2000; Habib and 
Kelly, 2001; Habib et al., 2001). In MIHT, an ordered 
sequential recovery of the registration parameters is adopted 
as a strategy to reduce the computational complexity. This 
ordered sequential solution relies on quasi-invariant 
parameters. These 
parameters are associated either with specific locations that 
de-correlate them or with the selection of data elements that 
contribute to specific parameter(s). The basic idea behind 
the Hough-like approaches, such as MIHT, is the exploitation 
of the duality between the observation space and the 
parameter space. 
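This observation-space/parameter-space duality is most easily seen in the classic Hough line-detection case: a single point (x, y) maps to the sinusoid rho = x cos(theta) + y sin(theta) in parameter space, so each observation votes for every line that could pass through it, and collinear points intersect in one accumulator cell. A minimal sketch (our illustration; the accumulator resolution and parameterization are assumptions, not taken from the paper):

```python
import numpy as np

def hough_line_votes(points, n_theta=180, rho_res=1.0, rho_max=200.0):
    """Vote each observation point into a (theta, rho) accumulator."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    n_rho = int(2 * rho_max / rho_res)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        # One observation -> a full sinusoid of parameter-space votes.
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = ((rhos + rho_max) / rho_res).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    return acc, thetas

# Collinear points on the vertical line x = 5 (theta = 0, rho = 5)
# pile all their votes into a single parameter cell.
pts = [(5.0, float(t)) for t in range(1, 21)]
acc, thetas = hough_line_votes(pts)
i_theta, i_rho = np.unravel_index(np.argmax(acc), acc.shape)
theta_hat = thetas[i_theta]
rho_hat = i_rho * rho_res if False else i_rho * 1.0 - 200.0  # rho_res=1, rho_max=200
# recovers theta = 0, rho = 5 for this input
```

GIPSC applies the same duality to registration: the "observations" are pairings of data elements across the two images, and the accumulated votes cluster at the true transformation parameters.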