The development of automated approaches to enhance the 
processing of mobile mapping image sequences using 
computational vision theories and methods has been one of the 
research focuses at The University of Calgary. The methods 
developed can be grouped into two categories: information extraction and image bridging using stereo image sequences.
These methods are described along with a discussion of the 
evaluation results using VISAT image data sets. 
2. OVERVIEW OF MOBILE MAPPING TECHNOLOGY 
Mobile mapping systems represent a significant advance in multi-sensor integrated digital mapping technology, providing an innovative path towards the rapid and cost-effective collection of high-quality, up-to-date spatial information (Bossler et al., 1991; Chapman et al., 1999; El-Sheimy, 1996; Hock et al., 1995; Li, 1997; Novak, 1995; Schwarz et al., 1993b; Tao, 1998). The development of mobile mapping systems has been driven by multi-sensor integration technology. In general, we classify the sensors into three
categories: 
(1) Absolute orientation sensors 
• Environment-dependent external positioning sensors: GPS and radio navigation systems
• Self-contained inertial positioning sensors: INS, dead 
reckoning systems, gyroscopes, accelerometers, 
compasses, odometers, and barometers 
(2) Relative orientation sensors 
• Passive imaging sensors: Video and digital cameras 
• Active imaging sensors: Laser range finders or scanners, 
and Radar (SAR) 
(3) Attribute collection sensors 
• Passive imaging sensors: video/digital frame cameras, 
multi-spectrum/hyper-spectrum push-broom scanners 
• Active imaging sensors: SAR and Laser range finders or 
scanners 
• Manual recording: voice recording and touch-screen 
recording 
• Other sensors: temperature, air pressure, gravity gauges, 
etc. 
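
Purely as a reading aid, this three-way classification can be expressed as a small data structure; the sketch below (Python) simply restates the grouping above and is not part of any mobile mapping system software.

from enum import Enum

class SensorCategory(Enum):
    ABSOLUTE_ORIENTATION = "absolute orientation"    # platform-oriented
    RELATIVE_ORIENTATION = "relative orientation"    # feature-oriented
    ATTRIBUTE_COLLECTION = "attribute collection"    # feature-oriented

# Representative sensors for each category, taken from the lists above.
SENSOR_EXAMPLES = {
    SensorCategory.ABSOLUTE_ORIENTATION: ["GPS", "INS", "odometer", "barometer"],
    SensorCategory.RELATIVE_ORIENTATION: ["digital camera", "laser scanner", "SAR"],
    SensorCategory.ATTRIBUTE_COLLECTION: ["frame camera", "push-broom scanner", "voice recorder"],
}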
Figure 1. The concept of direct georeferencing (relating a global coordinate system, a local coordinate system, and the objects of interest)
Absolute orientation sensors are platform-oriented. They are 
used to determine the absolute locations of the mobile mapping 
platform, for instance, a van-type vehicle, with respect to a 
global coordinate system (e.g., WGS-84). On the other hand, 
relative orientation sensors provide the positional information 
of objects relative to the platform in a local coordinate system. 
Both relative orientation sensors and attribute collection sensors 
are feature-oriented, and many of them, such as imaging cameras and laser range finders/scanners, can provide both orientation and attribute information.
One of the most important concepts of mobile mapping systems is direct georeferencing; its conceptual layout is shown in Figure 1. Direct georeferencing refers to the use of absolute orientation sensors to determine the exterior orientation of the imaging sensors without ground control points or photogrammetric block triangulation. For example, if a camera sensor is used, any captured image can be “stamped” with the georeferencing parameters, namely three positional and three attitude parameters, obtained from the absolute orientation sensors (GPS/INS). As a result, 3-D reconstruction
using stereo images becomes straightforward, since the exterior 
orientation parameters of each image are available. Direct 
georeferencing greatly facilitates the mapping procedure: the rapid turn-around of data processing and the reduced cost of ground control surveys are major benefits.
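
Although the underlying formula is not reproduced in this section, the standard direct georeferencing relation connecting the frames of Figure 1 can be written, in conventional notation and as an illustration only, as

    \mathbf{r}_P^{m} = \mathbf{r}^{m}(t) + \mathbf{R}_{b}^{m}(t)\left( s_P\,\mathbf{R}_{c}^{b}\,\mathbf{r}_{p}^{c} + \mathbf{a}^{b} \right)

where \mathbf{r}_P^{m} is the object point in the global (mapping) frame, \mathbf{r}^{m}(t) and \mathbf{R}_{b}^{m}(t) are the GPS/INS-derived position and attitude of the platform body frame at the exposure time t (the three positional and three attitude parameters mentioned above), \mathbf{R}_{c}^{b} and \mathbf{a}^{b} are the calibrated boresight rotation and lever-arm offset between the camera and the body frame, \mathbf{r}_{p}^{c} is the image vector of the measured point in the camera frame, and s_P is a scale factor resolved by intersecting rays from two or more images.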
3. VISUAL MOTION ANALYSIS OF MOBILE MAPPING 
IMAGE SEQUENCES 
In recent years, the computer vision community has extensively 
addressed the computational aspects of visual motion analysis. 
In this section, the general methodology used in visual motion 
studies is reviewed. The purpose is to place the problems addressed here within the framework of visual motion analysis, so that its well-developed methods and accumulated experience can be exploited.
3.1 Basic Issues in Visual Motion Analysis 
In principle, the study of visual motion analysis, or motion and structure from image sequences, consists of two basic issues:
• Determining image optical flow and/or feature correspondences from image sequences (a matching sketch is given after this list), and
• Estimating motion and structure parameters using the 
determined optical flow and/or feature correspondences. 
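
As a minimal illustration of the first issue, the sketch below (Python/NumPy) matches a single point between two images by normalised cross-correlation along a one-dimensional search range; it is a generic template-matching sketch under the assumption of an approximately aligned stereo pair, not the specific matching method developed for the VISAT sequences, and all function names and parameters are illustrative.

import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized image patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_point(left, right, row, col, half=7, search=40):
    """Find the conjugate of left[row, col] in the right image by scanning the
    same row over +/- search pixels (an approximate epipolar line)."""
    template = left[row - half:row + half + 1, col - half:col + half + 1]
    best_col, best_score = col, -1.0
    for c in range(max(half, col - search), min(right.shape[1] - half, col + search)):
        candidate = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(template, candidate)
        if score > best_score:
            best_score, best_col = score, c
    return best_col, best_score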
In mobile mapping systems, the captured image sequences are georeferenced using GPS/INS integrated positioning
techniques. The orientation parameters of each camera exposure 
center are determined with respect to a global coordinate 
system, i.e., the motion is known. By using techniques of 
photogrammetric intersection, the computation of 3-D object 
coordinates is very straightforward. However, conjugate points 
need to be identified, that is, point (feature) correspondence 
needs to be established. Therefore, our research emphasis will 
be placed on the first issue - determination of correspondences 
from image sequences. 
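
Once conjugate points are available, the photogrammetric intersection mentioned above reduces to a small least-squares problem. The following sketch is our illustration only (written in Python/NumPy; the exposure centres, attitudes and principal distance are hypothetical placeholders, not VISAT values): it intersects the rays of one conjugate point pair in the global frame.

import numpy as np

def intersect_rays(centres, directions):
    """Least-squares intersection of rays X = C_i + s_i * d_i: minimises the
    sum of squared distances from the point X to each ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centres, directions):
        d = d / np.linalg.norm(d)           # unit ray direction
        P = np.eye(3) - np.outer(d, d)      # projector onto the plane normal to d
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Hypothetical example: exposure centres C1, C2 and attitudes R1, R2 are taken
# from the GPS/INS georeferencing "stamp"; the image vectors use conjugate image
# coordinates (x, y) and the principal distance f (all values are placeholders).
f = 0.006
R1 = R2 = np.eye(3)
C1, C2 = np.array([0.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0])
ray1 = R1 @ np.array([0.0010, 0.0005, -f])  # image vector rotated into the global frame
ray2 = R2 @ np.array([-0.0002, 0.0005, -f])
X = intersect_rays([C1, C2], [ray1, ray2])  # 3-D object coordinates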
In fact, the photogrammetry community has paid a great deal of attention to the second issue. The research results
from computer vision studies not only enhance the 
understanding of projective geometry and algebraic geometry, 
but also extend and enrich photogrammetric theories and 
methods, for example, videogrammetry. Some of the concepts, 
theories and methods derived by computer vision groups have 
been investigated by photogrammetrists and applied in