The evaluation of the unsupervised method was based on a visual inspection.
When many classes are specified, a single land-cover class may be split into several classes because of spectral differences within that class. When only a few classes are specified, unrelated land covers may be assigned to the same class.
Figure 3: Results of unsupervised classification (ISODATA method, 12 classes)
Figure 4: Results of unsupervised classification (ISODATA method, 5 classes)
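For reference, a minimal Python sketch of this kind of pixel-based unsupervised classification is given below. scikit-learn offers no ISODATA implementation, so k-means stands in for it here (ISODATA additionally splits and merges clusters between iterations), and the random image is only a synthetic stand-in for the study data.

import numpy as np
from sklearn.cluster import KMeans

def classify_pixels(image, n_classes):
    """Cluster each pixel of a (rows, cols, bands) array into n_classes."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(np.float64)
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=0).fit_predict(pixels)
    return labels.reshape(rows, cols)

# Synthetic 4-band (R, G, B, NIR) scene standing in for the study data.
image = np.random.rand(100, 100, 4)
map_12 = classify_pixels(image, 12)  # many classes: one cover type may split
map_5 = classify_pixels(image, 5)    # few classes: unrelated covers may merge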
It is thus concluded that the pixel-based approach is
not acceptable for classifying complex urban
environments with very high resolution remote
sensing data. The reasons for this are as follows
(Hurskainen & Pellikka 2004):
• Pixels do not sample the urban environment at the spatial scale to be mapped
• Buildings are represented by groups of pixels which should be treated as individual objects
• Buildings produce a wide range of spectral signatures
• Many features in the urban environment appear spectrally similar
OBJECT-BASED CLASSIFICATION
The limitation of the pixel in tackling issues of location,
scale and distance has caused a shift towards object-
based classification (De Dapper et al. 2006). Even
though traditional pixel-based classifiers are well
developed and there are sophisticated variations, they do
not make use of available spatial concepts. The need for
context-based algorithms and object-oriented image
processing is increasing and it is hypothesized that
object-based image analysis will initiate new
developments towards integrating GIS and remote
sensing functions (Blaschke et al. 2000).
The software used for object-based classification in this
study is eCognition. A necessary prerequisite for object-
based image classification is image segmentation. The
shape of segments derived in eCognition is determined
by the following parameters (Hofmann 2001), whose roles are illustrated in the sketch after this list:
• Weight of image channels: specifies the weight of each spectral band in the segmentation. Channels with higher weights have a greater influence on object generation.
• Scale parameter: influences the average object size. This parameter determines the maximum allowed heterogeneity of the objects. The larger the scale parameter, the larger the objects become.
• Colour/Shape: the influence of colour vs. shape can be adjusted. The higher the shape value, the less spectral homogeneity influences the object generation.
• Smoothness/Compactness: these are attributes of the “shape” criterion. If the shape criterion is larger than 0, the user can determine whether objects shall be more compact or smoother.
• Level: determines whether a newly generated image level will overwrite a current level, or whether the generated objects shall contain sub- or super-objects of an existing level. The order of generating the levels affects the objects’ shape (top-down vs. bottom-up segmentation).
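To make the roles of these parameters concrete, the following minimal Python sketch uses scikit-image's SLIC algorithm as a rough open-source analogue of eCognition's proprietary multiresolution segmentation. The synthetic image, band weights, segment counts and compactness values are illustrative assumptions, not values from this study: n_segments loosely mirrors the scale parameter, and compactness mirrors the colour/shape weighting.

import numpy as np
from skimage.segmentation import slic

image = np.random.rand(200, 200, 4)        # synthetic 4-band (R, G, B, NIR) scene
weights = np.array([1.0, 1.0, 1.0, 1.0])   # weight of image channels
weighted = image * weights                 # heavier bands steer segmentation more

# Fewer segments correspond to a larger scale parameter; higher compactness
# favours compact object shapes over spectral homogeneity, akin to raising
# the shape weighting. convert2lab=False because the input is not plain RGB.
coarse = slic(weighted, n_segments=100, compactness=10.0,
              convert2lab=False, channel_axis=-1)
fine = slic(weighted, n_segments=400, compactness=10.0,
            convert2lab=False, channel_axis=-1)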
Using spectral information for image segmentation
In the first strategy, the image was segmented using the
multiresolution segmentation algorithm in eCognition.
All four image layers (red, green, blue and NIR) were
used with equal weighting in the segmentation process.
The size of segments was decided by trial and error.
Smaller segments were merged to create larger segments
that consisted of built-up areas as opposed to individual
buildings. These built-up areas consisted of residential
buildings, gardens, roads, etc. It was difficult to obtain
suitable segments using only spectral information. The
segments were not uniform in shape and size, and some
contained an undesirable mixture of classes. Some segments appeared homogeneous, but did not logically correspond to features in the image.
In Figure 5a, it can be seen that the selected segment
contains a building, a portion of a road and some trees.
These segments were created from initially smaller
segments with a scale parameter of 50 (Figure 5b), which were then used as input to a multiresolution
segmentation to create segments with a scale parameter
of 100 (as seen in Figure 5a). An initial segmentation at a scale parameter of 100 results in slightly different segments, as can be seen in Figure 5c.
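This two-level, bottom-up strategy can be sketched in the same open-source terms: a fine SLIC segmentation stands in for the scale-50 level, and merging adjacent, spectrally similar segments over a region adjacency graph stands in for re-segmenting those objects at scale 100. The synthetic image and the merge threshold below are illustrative assumptions; eCognition's actual merge criterion differs in detail.

import numpy as np
from skimage.segmentation import slic
from skimage import graph

image = np.random.rand(300, 300, 4)   # synthetic 4-band stand-in scene

# Level 1: fine segments (playing the role of scale parameter 50).
fine = slic(image, n_segments=600, compactness=10.0,
            convert2lab=False, channel_axis=-1)

# Level 2: merge adjacent, spectrally similar segments into super-objects
# (playing the role of re-segmentation at scale parameter 100).
# rag_mean_color expects 3 channels, so only the RGB bands are used here.
rag = graph.rag_mean_color(image[..., :3], fine)
coarse = graph.cut_threshold(fine, rag, thresh=0.1)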