ISPRS Commission III, Vol. 34, Part 3A, "Photogrammetric Computer Vision", Graz, 2002
FUSION OF LIDAR DATA AND AERIAL IMAGERY FOR A
MORE COMPLETE SURFACE DESCRIPTION
Toni Schenk
CEEGS Department
The Ohio State University
schenk.2@osu.edu
Bea Csathó
Byrd Polar Research Center
The Ohio State University
csatho.1@osu.edu
Commission III, Working Group 6
KEY WORDS: Fusion, Lidar, Aerial Imagery, Surface Reconstruction, DEM/DTM
ABSTRACT
Photogrammetry is the traditional method of surface reconstruction, for example for the generation of DTMs. Recently, LIDAR
has emerged as a new technology for rapidly capturing data on physical surfaces. Its high accuracy and automation potential
result in the quick delivery of DEMs/DTMs derived from the raw laser data. The two methods deliver complementary surface
information. Thus it makes sense to combine data from the two sensors to arrive at a more robust and complete surface
reconstruction. This paper describes two aspects of merging aerial imagery and LIDAR data. The establishment of a
common reference frame is an absolute prerequisite. We solve this alignment problem by utilizing sensor-invariant features.
Such features correspond to the same object-space phenomena, for example breaklines and surface patches. Matched
sensor-invariant features lend themselves to establishing a common reference frame. Feature-level fusion is performed with
sensor specific features that are related to surface characteristics. We show the synergism between these features resulting
in a richer and more abstract surface description.
1. INTRODUCTION
It has long been recognized that surfaces play an impor-
tant role in the quest of reconstructing scenes from sensory
data such as images. The traditional method of reconstruct-
ing surfaces is by photogrammetry. Here, a feature on the
ground, say a point or a linear feature, is reconstructed from
two or more overlapping aerial images. This requires the
identification of the ground feature in the images as well as
their exterior orientation. The crucial step in this process is
the identification of the same ground feature. Human operators
are remarkably adept at finding conjugate (identical)
features. DEMs generated by operators on analytical plotters
or on softcopy workstations are of high quality, but the
process is time- and cost-intensive. Thus, major research
efforts have been devoted to making stereopsis an automatic
process.
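The reconstruction step described above can be illustrated with a small sketch: given the exterior orientation of two images, a conjugate feature defines one ray per image, and the ground point is the point closest to both rays in a least-squares sense. The function name and the camera-center/direction ray parameterization below are illustrative, not taken from the paper.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Least-squares intersection of two rays, each given by a
    camera center c and a viewing direction d in object space."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in ((c1, d1), (c2, d2)):
        d = d / np.linalg.norm(d)
        # Projector onto the plane orthogonal to the ray direction;
        # ||P (X - c)|| is the distance of X from the ray.
        P = np.eye(3) - np.outer(d, d)
        A += P
        b += P @ c
    # Normal equations of min_X sum ||P_i (X - c_i)||^2
    return np.linalg.solve(A, b)
```

With error-free orientations and image measurements the two rays intersect exactly and the solver returns the ground point; with real data it returns the midpoint-like compromise between the skew rays.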
Recently, airborne and spaceborne laser altimetry has
emerged as a promising method to capture digital eleva-
tion data effectively and accurately. In the following we use
LIDAR (LIght Detection And Ranging) as an acronym for the
various laser altimetry methods. An ever-increasing range
of applications takes advantage of the high accuracy poten-
tial, dense sampling, and the high degree of automation that
results in a quick delivery of products derived from the raw
laser data.
Photogrammetry and LIDAR have their unique advantages
and drawbacks for reconstructing surfaces. It is interest-
ing to note that some of the shortcomings of one method
can be compensated by advantages the other method of-
fers. Hence it makes eminent sense to combine the two
methods: we have a classical fusion scenario in which the
synergism of the two sensory inputs considerably exceeds
the information obtained from either sensor alone.
In Section 2 we elaborate on the strengths and weaknesses
of reconstructing surfaces from LIDAR and aerial imagery.
We also strongly advocate an explicit surface description
that greatly benefits subsequent tasks such as object recog-
nition and image understanding. Useful surface characteris-
tics are only implicitly available in classical DEMs and DSMs.
Explicit surface descriptions are also very useful for fusing
LIDAR and aerial imagery.
[Figure omitted: flow chart with boxes for sensor-invariant feature extraction, feature correspondence, and a common reference frame (left branch), and geometric & semantic feature extraction, feature-based fusion, and the reconstructed 3D surface (right branch).]
Figure 1: Flow chart of proposed multisensor fusion framework.
Fig. 1 depicts the flowchart of the proposed multisensor fu-
sion framework. Although we consider only LIDAR (L) and
aerial imagery (A) in this paper, the framework and the following
discussions can easily be adapted to include additional
sensors, such as a hyperspectral system; see e.g.
Csathó et al. (1999). The processes on the left side of Fig. 1
are devoted to the establishment of a common reference
frame for the raw sensory input data. The result is a unique
transformation between the sensor systems and the refer-
ence frame. Section 3 discusses this part of the fusion prob-
lem in detail.
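As a minimal illustration of such an alignment step, a rigid-body transformation between two sensor frames can be estimated by a least-squares fit over matched features. The point-based Kabsch-style fit below is a simplification for illustration only; the paper's approach matches sensor-invariant features such as breaklines and surface patches rather than point pairs.

```python
import numpy as np

def align_frames(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t,
    estimated from matched 3D features (rows of src and dst correspond)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In the fusion framework, the roles of src and dst would be played by feature coordinates in the LIDAR frame and the photogrammetric frame; the recovered (R, t) is the unique transformation into the common reference frame.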
The processes on the right side of the figure are aimed at
the reconstruction of the 3D surface by feature-based fusion.
This task benefits greatly from having the sensory input data
(L and A) aligned. Since the reconstructed surface is described
in the common reference frame, it is easy to go back