Edward M. Mikhail
The member organizations of the MURI team, depicted in Figure 1, are the Photogrammetry and Remote Sensing group within Purdue University, the University of
Southern California, and BAE SYSTEMS (formerly GDE, then Marconi) as an industrial partner. The title of the
project is: Rapid and Affordable Generation of Terrain and Detailed Urban Feature Data.
The overall vision relative to which the MURI Research
Center has been established is shown in Figure 2. Figure
3 illustrates the approach taken by the Center and the
interaction between the Team members to accomplish the
vision. The primary goal of the research is to
economically construct an accurate three-dimensional
database suitable for use in a visualization environment.
Many sources of data are considered: hand-held, aerial,
and satellite frame imagery, including video, both
panchromatic and color; multi-spectral and hyper-spectral
imagery; SAR, IFSAR and LIDAR data; and other
information sources such as digital elevation models.
Because of the diversity of the sources, rigorous
mathematical modeling of the various sensing systems is
imperative in order to accomplish accurate registration
and fusion. Section 2 is therefore devoted to this aspect of the research, followed by the important task of spatial
feature extraction in Section 3, and database construction and visualization in Section 4. The paper ends with Section 5,
on conclusions and recommendations for future research directions.
[Figure graphic not reproduced; legible labels include "Approach: Research by an Integrated Team of Complementary Expertise", imagery inputs (EO, MS/HS), and "Training and Technology Transfer".]
Figure 3. MURI Approach
2 SENSOR MODELING AND MULTI-SOURCE REGISTRATION
Since several different data sources are considered as input to the feature extraction module, it is imperative that they
are "registered" with respect to each other and relative to the terrain object space. In the case of imagery, registration
means that the mathematical model of the sensor acquiring the imagery is rigorously constructed and recovered. Two
types of passive sensors, frame and push-broom, are treated in separate subsections below. Accurate sensor models are
also important for the generation of digital elevation models, which are not only a product in their own right but are
also used in support of other tasks, such as hyperspectral image classification and cultural feature extraction, as
discussed later.
2.1 Modeling For Frame Singles and Sequences
Frame imagery has been the most common form and its modeling has therefore been discussed extensively in the
photogrammetric literature over the years. Each frame is assigned six exterior orientation (EO) elements, and usually
three geometric interior orientation (IO) elements. When considering uncalibrated digital cameras, the IO elements are
often augmented by several more parameters that account for some or all of the following: skewness, differential scale,
and radial and decentering lens distortion coefficients. These are explicitly carried as parameters in the pair of
photogrammetric collinearity equations for each ray. Since the equations are non-linear, it is important to have
reasonable approximations for the unknown parameters. Such approximations are sometimes difficult to obtain,
particularly for the unusual image acquisition geometries of oblique aerial photography and hand-held imagery. Linear,
invariance-based formulations are useful for quickly deriving approximations for the camera parameters. One such
formulation relates the image coordinates of a pair of overlapping images through the Fundamental Matrix, F, or
[x2  y2  1] F [x1  y1  1]^T = 0        (1)
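As an illustrative sketch (the paper does not give an implementation, and the function names here are our own), F can be estimated from eight or more point correspondences with the classical normalized eight-point algorithm, with the rank-2 constraint enforced afterwards:

```python
import numpy as np

def normalize(pts):
    """Shift points to zero mean and scale mean distance to sqrt(2)."""
    centroid = pts.mean(axis=0)
    d = np.sqrt(((pts - centroid) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2.0) / d
    T = np.array([[s, 0, -s * centroid[0]],
                  [0, s, -s * centroid[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def estimate_F(x1, x2):
    """Normalized 8-point estimate of F such that [x2 y2 1] F [x1 y1 1]^T = 0."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # Each correspondence contributes one row of the linear system A f = 0,
    # with f the row-major 9-vector of F.
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: F has 9 elements but only 7 independent ones.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalizing transformations.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

The coordinate normalization is the standard numerical-conditioning step; without it the linear system is badly scaled for typical pixel coordinates.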
Although F has 9 elements, only 7 are independent. As an example, this technique is applied to a pair of convergent
video frames (Figure 4). After F is estimated, relative camera transformation matrices for each of the two video frames
can be extracted from the fundamental matrix. Then projective model coordinates can be computed for any known
ground control point visible on the two frames. Using the known ground control point coordinates in the 3D orthogonal
system and their corresponding projective model coordinates, fifteen elements of the 4x4 three-dimensional projective
transformation matrix can be estimated. Now the true camera transformation matrices are computed by multiplying the
relative camera transformation matrices by the projective transformation matrix. Finally, the real camera parameters
are extracted from the camera transformation matrices.
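The two linear-algebra steps that anchor this chain can be sketched numerically. The NumPy fragment below (hypothetical helper names; an illustration of the procedure described above, not the authors' code) estimates the 4x4 three-dimensional projective transformation H, with its 15 independent elements, from five or more control points by a DLT-style linear system, and then forms the true camera matrix as the product of the relative camera matrix with H:

```python
import numpy as np

def fit_projective_3d(Xt, Xp):
    """Estimate 4x4 H with Xp ~ H Xt (homogeneous 3D points).

    Xt: (n, 4) known ground-control coordinates in the 3D orthogonal system.
    Xp: (n, 4) corresponding projective model coordinates, n >= 5 in
        general position.  Since H is homogeneous, only 15 of its 16
        elements are independent; the scale is fixed by unit Frobenius norm.
    """
    rows = []
    for xt, xp in zip(Xt, Xp):
        # Xp ~ H Xt  means  xp[a]*(H xt)[b] - xp[b]*(H xt)[a] = 0
        # for every index pair (a, b); each point yields 3 independent rows.
        for a in range(4):
            for b in range(a + 1, 4):
                row = np.zeros(16)               # row-major 16-vector of H
                row[b * 4:(b + 1) * 4] = xp[a] * xt
                row[a * 4:(a + 1) * 4] -= xp[b] * xt
                rows.append(row)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(4, 4)
    return H / np.linalg.norm(H)

def upgrade_camera(P_rel, H):
    """True camera matrix from the relative (projective-frame) camera and H."""
    return P_rel @ H
```

In practice the control-point coordinates are noisy, so such a linear estimate would normally serve as the starting approximation for an iterative least-squares refinement.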
International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B3. Amsterdam 2000. 593