In: Wagner W., Székely, B. (eds.): ISPRS TC VII Symposium - 100 Years ISPRS, Vienna, Austria, July 5-7, 2010, IAPRS, Vol. XXXVIII, Part 7B
DATA-DRIVEN ALIGNMENT OF
3D BUILDING MODELS AND DIGITAL AERIAL IMAGES
J. Jung, C. Armenakis*, G. Sohn
Department of Earth and Space Science and Engineering
Geomatics Engineering, GeoICT Lab
York University, Toronto, Canada
{jwjung, armenc, gsohn}@yorku.ca
Commission VII, WG VII/6
KEY WORDS: Data fusion, registration, building models, digital image, similarity assessment, updating
ABSTRACT:
Various types of data taken from different sensors or from different viewpoints at different times are used to cover the same area.
This abundance of heterogeneous data requires the integration and therefore the co-registration of these data in many applications,
such as data fusion and change detection for monitoring of urban infrastructure and land resources. While many data registration
methods have been introduced, new automatic methods are still needed due to increasing volumes of data and the introduction of
new types of data. In addition, large-scale 3D building models have already been constructed for mapping or for generating 3D city
models. These valuable 3D data can also be used as a geometric reference in the sensor registration process. This paper addresses data
fusion and conflation issues by proposing a data-driven method for the automatic alignment of newly acquired image data with
existing large scale 3D building models. The proposed approach is organised in several steps: extraction of primitives in the 3D
building model and image domains, correspondence of primitives, matching of primitives, similarity assessment, and adjustment of
the exterior orientation (EO) parameters of the images. Optimal building primitives are first extracted from the existing 3D building model using a priority function defined by the building orientation, the building complexity, the inner angles of the building, and the geometric type of the building. The selected optimal building primitives are then projected into image space and matched with the straight lines extracted from the digital image, followed by a similarity assessment. For the initial localization, the straight lines extracted in the digital image are scored within a search area according to their location with respect to the corresponding optimal building primitives, and the location of the highest-scoring straight line is determined. Within this localized area, new straight lines are then extracted by weighting the straight lines representing each edge vector of the optimal building primitives. The corresponding vertices of the optimal building model are determined in the image by the intersection of these straight lines. Finally, the EO parameters of the images are efficiently adjusted based on the existing 3D building model, and any new image features can then be integrated into the 3D building model. An evaluation of the proposed method over various data sets is also presented.
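To make the pipeline described above more concrete, the following is a minimal sketch in Python of two of its building blocks: a weighted priority score for selecting optimal building primitives, and the projection of a 3D model vertex into image space with given exterior orientation parameters. All names, the equal weights, and the rotation convention (BuildingPrimitive, priority_score, project_vertex) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# (i) weighted priority score over orientation, complexity, inner angles
#     and geometric type of a building primitive;
# (ii) collinearity-style projection of a 3D vertex into image space
#      using exterior orientation (EO) parameters.
from dataclasses import dataclass
import numpy as np


@dataclass
class BuildingPrimitive:
    vertices_3d: np.ndarray      # (n, 3) object-space vertices
    orientation_term: float      # all terms assumed normalised to [0, 1]
    complexity_term: float
    inner_angle_term: float
    type_term: float


def priority_score(b: BuildingPrimitive,
                   weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted sum of the four terms; higher means a better candidate.
    The equal weights are placeholders."""
    w = np.array(weights)
    terms = np.array([b.orientation_term, b.complexity_term,
                      b.inner_angle_term, b.type_term])
    return float(w @ terms)


def project_vertex(X: np.ndarray, R: np.ndarray, C: np.ndarray,
                   f: float) -> np.ndarray:
    """Project an object-space point X into image space. R rotates object
    to camera coordinates, C is the projection centre, f the focal length
    (camera assumed to look along -Z in the camera frame)."""
    p = R @ (X - C)
    return np.array([-f * p[0] / p[2], -f * p[1] / p[2]])


# Example: rank two primitives and project the best one's first vertex.
primitives = [
    BuildingPrimitive(np.array([[10.0, 20.0, 5.0]]), 0.9, 0.7, 0.8, 1.0),
    BuildingPrimitive(np.array([[40.0, 15.0, 8.0]]), 0.4, 0.6, 0.5, 0.5),
]
best = max(primitives, key=priority_score)
R, C, f = np.eye(3), np.array([0.0, 0.0, 500.0]), 0.1
print(project_vertex(best.vertices_3d[0], R, C, f))
```

The projected vertices would then be compared against the straight lines extracted from the image during the matching and similarity-assessment steps, before the EO parameters are refined.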
1. INTRODUCTION
With the recent advancements in remote sensing technology,
various types of data taken from different sensors or from
different viewpoints at different times are used to cover the
same area. This abundance of heterogeneous data requires the
integration and therefore the co-registration of these different
data sets in many applications such as detection of changes in
the urban infrastructure and mapping of land resources. While
many data registration methods have been introduced, new
automatic methods are still needed due to the increasing volume
of data and the introduction of new types of data. Zitova and
Flusser (2003) presented a comprehensive survey of image
registration methods, while Fonseca and Manjunath (1996)
compared registration techniques for multisensor remotely
sensed imagery and presented a brief discussion of each of the
techniques. Habib et al. (2005) introduced alternative approaches
for the registration of data captured by photogrammetric and
lidar systems to a common reference frame. However, most
studies aim to register images with data from other sensors, such
as lidar and SAR data sets. Although large-scale 3D building
models have already been generated in Google Earth (Google)
and Virtual Earth (Microsoft), the application of this building
information is limited to a secondary role supporting text-based
data search. However, these valuable 3D data can also be used as
a geometric reference in the sensor registration process. Therefore,
this paper addresses data fusion and conflation issues by
proposing a data-driven method for the automatic alignment of
newly acquired image data with existing large scale 3D
building models. Also, while existing 3D building models have
inherent errors, in this study we assume that they are free of any
geometric errors and that the exterior orientation parameters of
the images are to be adjusted using the 3D building model as
reference control data. This paper is organized into four parts:
Section 2 describes the proposed registration method, Section 3
deals with the evaluation of the approach, and conclusions are
given in Section 4.
* Corresponding author.