International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B7. Istanbul 2004
5. OBJECT BASED CLASSIFICATION
With an object-based classification tool, eCognition 3.0 by
Definiens Imaging, it is possible to simulate human perception
through classification rules. The simulation works by selecting
objects of interest in the segmentation layer: in the pre-event
image these are buildings, while in the post-event image they are
large damaged areas (hot spots of collapsed buildings and debris).
The same images mentioned in the previous paragraph were
used in this test.
The first operation performed in eCognition is the segmentation
of the post-event image, contrary to the manual classification,
where the pre-event image is segmented. Defining the objects of
interest means choosing which layer to segment. Segmenting the
post-event image seems a valuable solution, because large damaged
areas are present only in this image.
Using the scale parameter it is possible to recognize different
objects at different scales in the image. For instance, urban
areas can be separated using a texture value (for example the
object standard deviation), and vegetation using the NDVI index
or other indices, as in the case of the ERDAS decision tree
classifier.
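The two criteria just mentioned can be sketched in code. The following is a minimal numpy illustration (not eCognition's API; the function names and the choice of per-object standard deviation as the texture measure are assumptions for this sketch): NDVI for separating vegetation, and per-object standard deviation as a simple texture value for urban areas.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index, per pixel.
    High values indicate vegetation; eps avoids division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def object_std(band, labels):
    """Per-object standard deviation of a band, a simple texture
    measure: heterogeneous (urban) objects score high, homogeneous
    objects (bare soil, water) score low."""
    band = band.astype(float)
    return {int(obj_id): band[labels == obj_id].std()
            for obj_id in np.unique(labels)}
```

An object would then be labelled "vegetation" when its mean NDVI exceeds a threshold, and "urban" when its standard deviation does; the thresholds themselves are scene-dependent and are not given in the text.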
The level hierarchy is composed of at least two levels: the
texture level (level 3, scale 60-120) and the urban built-up
level (level 2, scale 15-30). At the urban level, it is necessary
to define how an object is perceived as changed. The human eye is
able to recognize reflectance changes without being misled by
shadows; conversely, with image differencing techniques, an
increase in reflectance can occur simply because a building falls
across a shadow in the other image. Thus, a sublevel of the urban
level (level 1) is created, to simulate the ability of the human
eye to evaluate reflectance only in non-shadowed zones. The final
level hierarchy structure is summarized below, in Figure 4 and in
Table 5.
Level 1 represents a classified layer of shadow and saturated
objects. In fact, Quickbird imagery presents some artifacts, or
saturated zones, that should be removed so that the damage index
is calculated only on meaningful pixels.
Level 2 is the object-of-interest level: at this level a change
index, such as the maximum absolute difference on the
multispectral images or the post/pre-event ratio on the
panchromatic band, is calculated. There is a hierarchical
relation between this level and the level below: the index is
calculated only for meaningful pixels (not shadowed and not
saturated).
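The Level 2 change indices, restricted to the meaningful pixels defined at Level 1, can be sketched as follows (a hypothetical numpy implementation; array shapes, function names and the use of NaN to flag excluded pixels are assumptions of this sketch, not the authors' code):

```python
import numpy as np

def change_index(pre_ms, post_ms, shadow_mask, saturated_mask):
    """Maximum absolute difference across multispectral bands,
    computed only on meaningful pixels (not shadowed, not saturated).
    pre_ms, post_ms: (bands, rows, cols); masks: (rows, cols) bool."""
    diff = np.abs(post_ms.astype(float) - pre_ms.astype(float))
    index = diff.max(axis=0)          # max abs difference over bands
    valid = ~(shadow_mask | saturated_mask)
    return np.where(valid, index, np.nan)

def pan_ratio(pre_pan, post_pan, valid, eps=1e-6):
    """Post/pre-event ratio on the panchromatic band, on valid pixels."""
    ratio = post_pan.astype(float) / (pre_pan.astype(float) + eps)
    return np.where(valid, ratio, np.nan)
```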
At Level 3, with the largest scale factor, vegetation and texture
indices can be used to separate regions of interest, such as
urban areas, and to exclude vegetated areas from the
classification. Large damaged areas, such as the flooded areas in
the Marmara earthquake, can be well separated at this level.
6. ACCURACY ASSESSMENT
The accuracy assessment of automatic change detection in very
high-resolution imagery suffers from problems deriving both from
geometric issues and from the way the results are interpreted,
and it is quite difficult to consider each question separately.
Geometric problems arise from image registration, image
resolution and off-nadir effects. Result evaluation depends on
how the percentages are calculated and on how false alarms are
taken into account. The percentages can be calculated principally
in two ways: in terms of total built-up area (and of how it is
determined), or in terms of the number of buildings correctly
identified, or missed, by the classification procedure. A
building is identified as damaged if at least 10% of its area is
classified as damaged.
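The 10% rule just stated can be expressed as a short sketch (hypothetical numpy code, not the authors' implementation; the label/mask representation is an assumption):

```python
import numpy as np

DAMAGE_FRACTION = 0.10  # threshold from the text: >= 10% of building area

def damaged_buildings(building_labels, damage_mask, threshold=DAMAGE_FRACTION):
    """Flag a building as damaged when at least `threshold` of its
    area (pixel count) is classified as damaged.
    building_labels: int array, 0 = background, >0 = building id.
    damage_mask: bool array, True where the classifier marked damage."""
    flagged = []
    for bid in np.unique(building_labels):
        if bid == 0:
            continue
        inside = building_labels == bid
        frac = damage_mask[inside].mean()  # fraction of damaged pixels
        if frac >= threshold:
            flagged.append(int(bid))
    return flagged
```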
Object hierarchy:
  Level 1: no shadow (exclude, no exclude); shadow
  Level 2: no urban; urban (shadow; no shadow: damage, no damage)
  Level 3: urban; no urban
Figure 4. Object rules in eCognition, Quickbird image
Data Set               Level 1    Level 2    Level 3
QuickBird
  Scale                   1          30         60
  Color, Smoothness       1       0.1, 0.0   0.1, 0.9
IRS
  Scale                   1           5         30
  Color, Smoothness       1       0.7, 0.9   0.7, 0.0
Table 5. Multiresolution segmentation parameters
Object-oriented and pixel-oriented classifications show some
differences in their results (Table 6). First of all, without a
very precise image registration (obtained with 2494 control
points), the pixel-based approach does not give stable results.
Conversely, an object-oriented approach can detect some changes
even if the building geometry is not corrected, and its results
improve significantly when all control points are used to reduce
relief displacement.
An analysis of the percentage of correct classification of
building damage with respect to the whole damaged area detected
by the automatic classifier, and in particular of how much of the
classified damage falls within building perimeters, shows that
"false alarms" are widespread; this is obviously due primarily to
the large amount of debris lying around buildings after a strong
earthquake. The extent of this phenomenon could be reduced by the
availability of large-scale cartographic vector basemaps or by
strict procedures for building classification and extraction.