COMMON ADJUSTMENT OF LAND-BASED AND AIRBORNE MOBILE MAPPING
SYSTEM DATA
Taher Hassan and Naser El-Sheimy
Mobile Multi-sensor Research Group
Department of Geomatics Engineering,
The University of Calgary
2500 University Dr. N.W. Calgary, Alberta, Canada T2N 1N4
Tel: (403) 220 7587, Fax: (403) 284 1980
E-mail: tfabbas@ucalgary.ca and elsheimy@ucalgary.ca
KEY WORDS: Photogrammetry, Mapping, Bundle Adjustment, Fusion, Georeferencing, Mobile Mapping, and Triangulation.
ABSTRACT:
Presently, numerous types of geospatial data have become available for both industrial and research use. These types of data are
collected by different sensors and from different platforms. Therefore, they are substantially different in physical nature,
amount/type of information, and geometric/radiometric resolutions. Multi-sensor data fusion techniques combine data from multiple
sensors and related information from associated databases, to achieve improved accuracies and more specific inferences than could
be achieved from a single sensor. Many researchers have proposed different fusion schemes for integrating different sensory data.
Yet, little attention has been paid to the fusion of airborne (AMMS) and land-based (LMMS) mobile mapping system
imagery/navigation data. Although this integration scheme may appear simple, or similar to other optical-to-optical
registration processes, its practical implementation carries many challenges. Images captured by LMMS and AMMS differ
in viewing direction, scale, coverage, and hidden/visible features. Consequently, the integration of the data captured by
AMMS and LMMS has high potential, since the two image/navigation data sets are complementary and can be combined to complete
the picture of the earth’s surface. This paper proposes a fusion scheme with the overall objective of improving 3D mapping
accuracy. The fusion scheme aims at creating a unified model that can be visualized in several application contexts. Also, the
common adjustment of terrestrial and aerial photogrammetric networks recovers/enhances the sensors’ georeferencing information,
providing a way to bridge LMMS trajectories during GPS outages or to georeference an aerial image block. The proposed integration framework uses
linear matching entities (e.g. road edges and lane lines) in addition to the traditionally used point-based approach.
The mathematical model of the collinearity condition has been adapted to suit multi-camera systems. In this paper, we consider the scientific
and technical issues concerning the strategy of the proposed fusion scheme. The modalities used are simulated from the
VISAT LMMS platform and other sources.
1. INTRODUCTION
The number of sensors orbiting our globe has grown steadily
over the years. Data collected by such sensors and from
different platforms are considerably different in physical nature,
amount/type of information, and geometric/radiometric
resolutions. It is well established through a large body of
research that no single sensor’s data can provide the optimal
solution for a specific query. Consequently, data
fusion/integration is considered in order to recover better solutions/information
about the object being captured. The topic of multi-sensor
data fusion has received considerable attention from the
scientific community over the years, for both military and non-military
applications. Many researchers have proposed different fusion
schemes for integrating different sensory data. However, little
attention has been paid to the fusion of the
imagery/navigation data of airborne and land-based mobile
mapping systems.
Mobile Mapping Systems (MMS) can be classified according to
the physical carrier into land-based (LMMS) and airborne (AMMS)
systems. Images captured by LMMS and AMMS differ in
viewing direction, scale, coverage, and hidden/visible features.
Consequently, the integration of the data captured by the two
systems is of high potential, since the two image/navigation data
sets are complementary and can be integrated to complete the
picture of the earth’s surface, as both rely on the shared
georeferencing relation sketched below.
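For context, both systems directly georeference their images from GPS/INS navigation data, and the common adjustment developed later builds on this shared relation. A minimal sketch of the standard direct georeferencing form is given below; the symbols (navigation position, lever arm, boresight) are generic assumptions for illustration, not the notation of the later sections.

\[
\mathbf{r}_i^{m} \;=\; \mathbf{r}_{\mathrm{nav}}^{m}(t) \;+\; R_b^{m}(t)\left(\mathbf{a}^{b} + s_i\, R_c^{b}\, \mathbf{r}_i^{c}\right)
\]

where \(\mathbf{r}_i^{m}\) is the mapping-frame position of ground point \(i\), \(\mathbf{r}_{\mathrm{nav}}^{m}(t)\) and \(R_b^{m}(t)\) are the GPS/INS position and attitude at exposure time \(t\), \(\mathbf{a}^{b}\) and \(R_c^{b}\) are the camera lever arm and boresight relative to the body frame, \(\mathbf{r}_i^{c}\) is the image vector of point \(i\) in the camera frame, and \(s_i\) is a point-dependent scale factor.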
In this paper, we introduce a novel scheme for the fusion of
AMMS and LMMS data by adapting many of the existing
tools to fit the special requirements of such a fusion scenario. The
investigation and the drawn conclusions are based on
simulated modalities from the VISAT platform and other
sources. This research aims at creating a unified model, e.g. a
facet-based one, which can be visualized in several application
contexts and serves different scientific communities. In
Section 2, the general benefits of data fusion are listed and
some examples are illustrated. Section 3 discusses
the potential of the proposed fusion scheme. Sections 4
and 5 describe the mathematical models for the matching entities after
applying the necessary adaptations to fit the proposed fusion
scheme. Section 6 outlines the main special requirements and
features of the fusion scheme framework. The performed
simulations and the obtained results are described in Section 7.
Conclusions are drawn in Section 8.