2.1 Interpolation of LIDAR Data
The LIDAR data include ground point data and surface point
data. The procedure starts by resampling the two sets of
discrete LIDAR points into regular grids, the DTM and the
DSM (Briese et al., 2002), respectively. A TIN-based
interpolation method is applied to rasterize the LIDAR data
(Behan, 2000).
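As an illustration, the following Python sketch rasterizes scattered LIDAR points onto a regular grid with SciPy's griddata, whose linear mode triangulates the points (a TIN) before interpolating inside each triangle. The function name, input arrays, and the default cell size are assumptions for illustration, not values from this work.

    import numpy as np
    from scipy.interpolate import griddata

    def rasterize(points_xy, heights, cell_size=1.0):
        """Interpolate scattered LIDAR points onto a regular grid.

        griddata with method='linear' triangulates the input points
        (a TIN) and interpolates linearly inside each triangle.
        """
        x_min, y_min = points_xy.min(axis=0)
        x_max, y_max = points_xy.max(axis=0)
        xs = np.arange(x_min, x_max, cell_size)
        ys = np.arange(y_min, y_max, cell_size)
        grid_x, grid_y = np.meshgrid(xs, ys)
        return griddata(points_xy, heights, (grid_x, grid_y), method='linear')

    # DTM from ground points, DSM from surface points (illustrative usage):
    # dtm = rasterize(ground_xy, ground_z)
    # dsm = rasterize(surface_xy, surface_z)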
2.2 Space Registration
Space registration is another preprocessing step for data fusion.
The objective of space registration is to build up the relationship
between LIDAR space and image space. We use ground
control points to build the mathematical model for space
registration. Hence, the LIDAR data and image data are
co-registered in the same georeference system.
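A minimal sketch of such a registration is given below, assuming a 2D affine transformation fitted to the ground control points by least squares; the exact form of the mathematical model is not specified here, so the affine choice and the function names are illustrative assumptions.

    import numpy as np

    def fit_affine(src_xy, dst_xy):
        """Least-squares affine transform mapping src_xy -> dst_xy.

        src_xy, dst_xy: (n, 2) arrays of ground control point coordinates
        in the two spaces (e.g. LIDAR ground coordinates and image space).
        """
        n = src_xy.shape[0]
        A = np.zeros((2 * n, 6))
        A[0::2, 0:2] = src_xy     # rows for the x equations
        A[0::2, 2] = 1.0
        A[1::2, 3:5] = src_xy     # rows for the y equations
        A[1::2, 5] = 1.0
        b = dst_xy.reshape(-1)
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return params.reshape(2, 3)   # [[a, b, tx], [c, d, ty]]

    def apply_affine(T, xy):
        """Map (n, 2) coordinates through the fitted transform."""
        return xy @ T[:, :2].T + T[:, 2]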
3. BUILDING DETECTION
The objective of building detection is to extract the building
regions. There are two steps in our scheme: (1) region-based
segmentation, and (2) knowledge-based classification. The
flow chart of building detection is shown in Figure 1.
[Figure 1 here: flow chart of building detection. Inputs: LIDAR (DSM/DTM), QuickBird orthoimage, and aerial orthoimage. Region-based segmentation is followed by knowledge-based classification, separating above-ground from ground objects, vegetation from non-vegetation, and buildings from non-buildings.]
Figure 1. Flow chart of building detection.
3.1 Region-based Segmentation
There are two ways to perform the segmentation. The first is
contour-based segmentation, which segments the image using
edge information. The second is region-based segmentation,
which uses a region growing technique to merge pixels with
similar attributes (Lohmann, 2002). We select
region-based segmentation because its noise tolerance is better
than that of contour-based segmentation. We combine the elevation
attribute from the LIDAR data and the radiometric attributes from the
orthoimages in the segmentation. Pixels with similar height
and spectral attributes are merged into a region.
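The following Python sketch illustrates region growing on the combined attributes: a neighbouring pixel joins a region when both its height and its spectral value are close to those of the seed pixel. The thresholds, the 4-connectivity, and the single-band spectral input are illustrative assumptions, not parameters from this work.

    import numpy as np
    from collections import deque

    def grow_regions(height, spectral, dh=1.0, ds=20.0):
        """Label pixels into regions of similar height and spectral value."""
        rows, cols = height.shape
        labels = np.zeros((rows, cols), dtype=int)
        current = 0
        for r0 in range(rows):
            for c0 in range(cols):
                if labels[r0, c0]:
                    continue
                current += 1                      # start a new region at the seed
                labels[r0, c0] = current
                queue = deque([(r0, c0)])
                while queue:
                    r, c = queue.popleft()
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        rn, cn = r + dr, c + dc
                        if (0 <= rn < rows and 0 <= cn < cols
                                and not labels[rn, cn]
                                and abs(height[rn, cn] - height[r0, c0]) < dh
                                and abs(spectral[rn, cn] - spectral[r0, c0]) < ds):
                            labels[rn, cn] = current
                            queue.append((rn, cn))
        return labels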
3.2 Knowledge-based Classification
After segmentation, an object-based classification rather than a
pixel-based classification is performed. Each separated region
after segmentation is a candidate object for classification. A
knowledge-based classification considering elevation, spectral,
texture, and shape information is performed to detect the
building regions (Hofmann et al., 2001). The LIDAR data,
QuickBird multispectral image, and aerial image are integrated
in this stage. A number of characteristics of these data are
considered to obtain the knowledge for classification. The
characteristics are described as follows.
Elevation: Subtracting the DTM from the DSM, we obtain the
normalized DSM (nDSM), which contains the height
information above the ground. It represents the objects rising from
the ground. By setting an elevation threshold, one can separate
objects above the ground from objects on the ground. The above-ground
surface includes buildings and vegetation that are higher than the
elevation threshold.
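A minimal sketch of this rule follows; the 2.5 m threshold is an illustrative assumption, not a value from this work.

    import numpy as np

    def above_ground_mask(dsm, dtm, height_threshold=2.5):
        """nDSM = DSM - DTM; pixels higher than the threshold are
        candidate buildings or tall vegetation."""
        ndsm = dsm - dtm
        return ndsm, ndsm > height_threshold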
Spectral: The spectral information comes from the QuickBird
multispectral images, which contain blue, green, red, and near-infrared
bands. The near-infrared band gives useful spectral
information for vegetation. The well-known Normalized
Difference Vegetation Index (NDVI) is used to distinguish vegetation from
non-vegetation areas.
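The standard NDVI computation is sketched below; the 0.3 vegetation threshold in the usage comment is an illustrative assumption.

    import numpy as np

    def ndvi(nir, red):
        """NDVI = (NIR - Red) / (NIR + Red); vegetation gives high values."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + 1e-9)   # epsilon avoids division by zero

    # vegetation = ndvi(qb_nir, qb_red) > 0.3   # threshold is illustrative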
Texture: Several papers have demonstrated that texture information
is useful for building detection (Zhang, 1999). The texture
information comes from the high spatial resolution aerial images.
We use the Grey Level Co-occurrence Matrix (GLCM) for
texture analysis. The GLCM is a matrix of the relative frequencies
with which pairs of pixel values occur at neighboring positions
within a processing window; from it, we compute the entropy and
homogeneity measures of the co-occurrence probabilities. The
texture information is used to separate buildings from vegetation
when the objects have similar spectral responses.
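A minimal sketch using scikit-image is given below (graycomatrix and graycoprops in skimage 0.19 and later); the pixel offset, grey-level quantization, and per-window handling are illustrative assumptions.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(window, levels=32):
        """Entropy and homogeneity of one grey-level image window."""
        # Quantize to a small number of grey levels to keep the GLCM compact.
        q = (window.astype(float) / max(float(window.max()), 1.0)
             * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0],
                            levels=levels, symmetric=True, normed=True)
        p = glcm[:, :, 0, 0]
        entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
        homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]
        return entropy, homogeneity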
Shape: The shape attributes include area and length-to-width
ratio. The area attribute can be used to filter out small
objects. The length-to-width ratio is suitable for removing thin,
elongated objects.
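One way the elevation, spectral, texture, and shape knowledge could be combined per region is sketched below; every threshold and the exact rule structure are illustrative assumptions, not values or rules from this work.

    def is_building(region):
        """region: dict with mean nDSM height, mean NDVI, GLCM entropy,
        area (m^2), and length-to-width ratio of one segmented region."""
        above_ground = region['mean_ndsm'] > 2.5           # elevation rule
        non_vegetation = (region['mean_ndvi'] < 0.3         # spectral rule
                          or region['glcm_entropy'] < 2.0)  # texture rule
        compact = (region['area'] > 30.0                    # shape rules
                   and region['length_to_width'] < 5.0)
        return above_ground and non_vegetation and compact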
4. BUILDING RECONSTRUCTION
After extracting the building regions, each individual building
region is isolated. Then, we reconstruct the building model for
each individual building region. The spatial resolution of the aerial
image is better than that of the QuickBird multispectral satellite image.
Thus, we select the aerial image to reconstruct the building models.
There are four steps in our scheme: (1) 3D planar patch
forming, (2) initial building edge detection, (3) straight line
extraction, and (4) the split-merge-shape method for building
modeling. The flow chart of building reconstruction is shown
in Figure 2.