FAST RECOVERY OF IMAGE ORIENTATION
USING VIRTUAL URBAN MODELS
Charalampos Georgiadis, Anthony Stefanidis, Peggy Agouris
Dept. of Spatial Information Science and Engineering
National Center for Geographic Information and Analysis
University of Maine,
Boardman Hall, Room 348
Orono, ME 04469-5711, USA
{harris, tony, peggy}@spatial.maine.edu
Commission V, WG: V/2
KEY WORDS: Modelling, Close Range, Motion Imagery, Virtual Models, Orientation, Change Detection.
ABSTRACT:
As image collection moves from static frames to GPS-enabled motion imagery, and from off-line desktop analysis to location-
based computing, we are faced with the challenge of developing rapid techniques to analyze large volumes of image datasets. In this
paper we present a novel hierarchical approach to rapidly recover the orientation of motion imagery in urban areas. We proceed by
comparing object configurations depicted in this imagery to potential configurations that can be formed using the virtual database.
By considering abstract properties such as distances between objects and their relative positions, we produce a fast algorithm that
supports the rapid recovery of image azimuths with accuracies on the order of 1 degree. This is an essential step to support the use of
motion imagery for the detection of changes and the subsequent updating of virtual models of large urban scenes.
1. INTRODUCTION
During the last few years we have witnessed major
advancements in the development of virtual reality (VR) models
of large-scale complex urban scenes. Research activities at
leading research centers, and even in the commercial sector, have
resulted in virtual models so complex and impressive that it is
hard to recall that, only a few years ago, image analysis efforts
were concentrated on modeling a couple of cubes and a toy in a
lab. The development of realistic virtual
models of urban environments has become the focus of
substantial research efforts, integrating image analysis,
databases, and GIS expertise.
Virtual models of urban scenes are photorealistic: they provide
views of the world very similar to the ones we would perceive if
we were to roam the scene, sometimes even to the point of
including graffiti on the walls and advertisement banners.
However, these models are not temporealistic: the real world is
in flux, yet these models represent only a single instance of the
scene, namely the moment when the images used to create them
were actually collected. Considering the high cost of building
such models, their updating has rarely been a priority.
This lack of temporal validity has hindered the use of virtual
models as GIS databases, even though they convey geospatial
information and their expressive power is sought after in the
GIS community.
In order to overcome this problem the photogrammetric
community is faced with the challenge of devising efficient
techniques to support fast and accurate change detection in
large-scale urban environments. Towards this goal we are
assisted by advances in sensor technology and wireless
communications. These advances have facilitated the evolution
of image collection from static frames to GPS-enabled motion
imagery (MI), and from off-line desktop analysis to location-
based computing. This enables a data collection modus operandi
whereby a GPS-enabled sensor roams a scene, collecting
ground-level motion imagery. In the context of this paper we
use the term motion imagery to refer to imagery collected at
video rates, or even as select frames captured a few seconds (or
even minutes) apart, using either a video or a still camera.
Examples of such datasets include imagery captured by hand-
held cameras while roaming an urban environment, or
imagery collected by a network of fixed sensors (e.g.
surveillance cameras) monitoring a scene.
The transition from static frames to motion imagery datasets
imposes computational challenges associated with the large
dataset volumes and the high temporal frequency. Figure 1
shows a framework for the use of motion imagery to detect
changes and update VR models of urban scenes. It is a coarse-
to-fine approach, characterized by the hierarchical comparison
of spatial information captured in MI datasets to the
corresponding information as it is modelled in the virtual
database. Sensor location is provided by GPS, allowing us to
tag each frame with coordinate information and a time stamp.
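As a concrete illustration of this tagging step, a per-frame record could take the form sketched below. This is a minimal sketch, not part of the paper's system; all field names (TaggedFrame, lat, lon, alt, timestamp) are our own illustrative assumptions, since the text states only that each frame carries coordinate information and a time stamp.

```python
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    """One motion-imagery frame with the GPS tag described in the text.

    Field names are illustrative assumptions; the paper states only
    that each frame is tagged with coordinates and a time stamp.
    """
    frame_id: int     # position of the frame in the MI feed
    lat: float        # sensor latitude from GPS (degrees, WGS84 assumed)
    lon: float        # sensor longitude from GPS (degrees, WGS84 assumed)
    alt: float        # sensor height from GPS (meters, assumed)
    timestamp: float  # capture time, e.g. seconds since the epoch
```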
Angular orientation information is recovered in a two-step
process, simulating the human orienteering approach. We start
by comparing an image to a panoramic view of the area around
the sensor location to identify azimuth. The input in this process
is the 3-D VR database and the MI feed. Due to the nature of the
sensors we deal with (hand-held or vehicle-mounted cameras),
we are only interested in the calculation of the image azimuth,
defined as the rotation angle around the Z-axis of the local
geodetic coordinate system. Once the approximate azimuth
information is recovered we are able to proceed with precise
matching (image-to-VR model) and traditional
photogrammetric techniques to solve for the orientation
parameters. Using this information we then employ established
change detection techniques to identify changes in buildings
and other similar features [Agouris et al., 2000, 2001].
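To make the first (azimuth identification) step concrete, the sketch below shows one plausible realization, assumed rather than taken from the paper: a 1-D signature extracted from the frame is slid around a 360-degree signature rendered from the VR model at the GPS-reported sensor position, and the best-matching cyclic shift is reported as the azimuth. The function name, the descriptor, and the bin width are our own illustrative choices.

```python
import numpy as np

def recover_azimuth(frame_sig: np.ndarray,
                    pano_sig: np.ndarray,
                    bin_deg: float = 1.0) -> float:
    """Return the azimuth (degrees) at which a frame signature best
    matches a 360-degree panoramic signature rendered from the VR model.

    Both signatures are 1-D arrays with one descriptor value per
    azimuth bin (e.g. a count of depicted objects per bin); the
    choice of descriptor is an assumption made for illustration.
    """
    n, w = len(pano_sig), len(frame_sig)
    scores = np.empty(n)
    for shift in range(n):
        # Azimuth is cyclic, so wrap the panorama window past 360 deg.
        window = np.take(pano_sig, np.arange(shift, shift + w), mode='wrap')
        # Higher score = smaller sum-of-squared-differences mismatch.
        scores[shift] = -np.sum((window - frame_sig) ** 2)
    return float(np.argmax(scores)) * bin_deg
```

With 1-degree bins this exhaustive cyclic search evaluates only 360 candidate shifts per frame, which is consistent with the fast, coarse character of the first step; the precise orientation is then refined photogrammetrically as described above.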