International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol XXXV, Part B4. Istanbul 2004
2. IMAGE PROCESSING AND GIS TOOLS IN
MAPPING OPERATIONS
There is a significant availability of photogrammetric, image
analysis and GIS tools and functionality. The complexity of
mapping operations has not yet led to a general approach for the
various procedures. However, by using the appropriate existing
tools it is possible to enhance the operations, to improve the
quality and type of results, and to introduce semi-automated
approaches. The following sections present several image
analysis/processing and spatial analysis tools and their
contributions to the operations of feature recognition, feature
extraction and change detection.
2.1 Feature recognition
For the extraction of information from images, the various
objects have to be identified through the process of
interpretation of the image patterns. The increased availability
of multispectral digital data offered by the new sensors allows
for “automated” interpretation using spectral pattern recognition
and image transform techniques. The simultaneous acquisition
of panchromatic and multispectral data additionally allows for
the implementation of image fusion techniques.
Pixel classification methods allow for spectral pattern
recognition, producing various thematic categories by assigning
similar pixels to the same thematic class. The training of the
algorithmic classifiers and the interpretation of the resulting
clusters are based on human knowledge (e.g., training areas,
interpretation of pixel clusters).
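A minimal sketch of such supervised spectral pattern recognition is a minimum-distance-to-means classifier; the class names, band values and training samples below are hypothetical, chosen only to illustrate the training-area idea:

```python
import math

# Minimum-distance-to-means classifier: a toy sketch of supervised
# spectral pattern recognition.  Each pixel is a tuple of band values;
# training areas supply labelled sample pixels per thematic class.

def class_means(training):
    """training: {class_name: [pixel, ...]} -> {class_name: mean_vector}."""
    means = {}
    for name, pixels in training.items():
        n, bands = len(pixels), len(pixels[0])
        means[name] = tuple(sum(p[b] for p in pixels) / n for b in range(bands))
    return means

def classify(pixel, means):
    """Assign the pixel to the class with the nearest spectral mean."""
    return min(means, key=lambda c: math.dist(pixel, means[c]))

# Hypothetical training areas in a (red, NIR) spectral space.
training = {
    "water":      [(20, 10), (25, 12), (22, 11)],
    "vegetation": [(40, 120), (45, 130), (42, 125)],
}
means = class_means(training)
print(classify((23, 12), means))    # near the water cluster
print(classify((44, 118), means))   # near the vegetation cluster
```

Operational classifiers (maximum likelihood, ISODATA clustering, etc.) differ in the decision rule, but the role of the human-selected training areas is the same.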
In the last few years object-oriented image analysis systems
have become available, where the basic processing units are
image objects rather than pixels (eCognition, 2003; Hay et al.,
2003; Walter, 2004). The objects are derived through a
multi-resolution segmentation based on fuzzy logic
classification approaches. The resulting image objects represent
the object information at the various image scale levels. The
objects in these levels are connected in a hierarchical manner,
while each object also relates to its neighbouring objects. The
end result is based on the object class hierarchical inheritance
and object aggregation processes.
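The grouping of pixels into image objects can be illustrated with a toy region-growing segmentation; this is only a sketch of segmentation in general, not the multi-resolution fuzzy segmentation used by systems such as eCognition:

```python
# Toy illustration of the object-based idea: group pixels into image
# objects by region growing under a similarity threshold.

def segment(image, tol):
    """Label 4-connected regions whose values differ from the seed by <= tol."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0][c0]:
                continue
            next_label += 1
            seed, stack = image[r0][c0], [(r0, c0)]
            labels[r0][c0] = next_label
            while stack:
                r, c = stack.pop()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not labels[nr][nc]
                            and abs(image[nr][nc] - seed) <= tol):
                        labels[nr][nc] = next_label
                        stack.append((nr, nc))
    return labels

# A hypothetical 3x3 grey-value patch with a dark and a bright region.
image = [[10, 11, 80],
         [12, 10, 82],
         [13, 81, 80]]
print(segment(image, tol=5))  # two image objects, labelled 1 and 2
```

The resulting labelled regions, rather than individual pixels, then become the units carrying attributes, neighbourhood relations and class hierarchy.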
Another tool for thematic classification is the use of two
spectral transformations, which modify the spectral space. The
first is the Normalized Difference Vegetation Index (NDVI),
computed as the ratio of the difference between the NIR and
red bands to their sum (Schowengerdt, 1997), which can be
used to show vegetation variations or changes appearing in the
image. The second is the
“Tasseled Cap” (Mather, 1987) spectral band transformation,
which is designed for the enhancement of the vegetation cover
density and condition. The multispectral bands are used in order
to compute three parameters called brightness, greenness and
wetness. Brightness is a weighted sum of visible and NIR
(VNIR) bands and expresses the total reflection capacity of a
surface cover. Soil areas and areas of dispersed vegetation
appear brighter (high total reflection). Greenness expresses the
difference between the total reflectance in the near infrared
bands and in the visible bands, and has been shown to
correlate moderately well with the density of the vegetation
cover. Wetness expresses the difference in total reflection
capacity between the VNIR bands and the short wave infrared
(SWIR) bands, and is more sensitive to surface moisture
content.
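Both transformations reduce to simple band arithmetic. In the sketch below the NDVI formula is the standard one; the Tasseled Cap weights, however, are placeholders, since the real coefficients are sensor-specific and published per instrument:

```python
# NDVI and a Tasseled Cap style transform as band arithmetic.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def tasseled_cap(bands, coeffs):
    """Each output component is a weighted sum (dot product) of the bands."""
    return {name: sum(w * b for w, b in zip(weights, bands))
            for name, weights in coeffs.items()}

print(round(ndvi(nir=120, red=40), 3))   # dense vegetation: well above 0
print(round(ndvi(nir=30, red=28), 3))    # bare surface: near 0

# Hypothetical 4-band coefficients, purely for illustration; real
# brightness/greenness/wetness weights depend on the sensor.
coeffs = {
    "brightness": (0.3, 0.3, 0.3, 0.3),
    "greenness":  (-0.3, -0.3, 0.8, 0.1),
}
print(tasseled_cap((40, 50, 120, 60), coeffs))
```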
The interpretability of an image can be enhanced through an
image fusion (also called sharpening) process (Armenakis et
al., 2003; Forsythe, 2004). Image fusion implies the merging of
the higher resolution panchromatic band with the lower
resolution multispectral bands. The aim of the fusion is to take
advantage of both the higher resolution and multispectral
content and to transfer the high frequency content of higher
resolution panchromatic image to the lower resolution
multispectral image. The result of the fusion is enhanced
multispectral or synthetic imagery at the higher resolution.
Various methods for image fusion, such as IHS (Intensity-Hue-
Saturation), PCA (Principal Component Analysis), band
substitution, arithmetic and Brovey (Pohl and Touron, 2000;
Cavayas et al., 2001; Wang et al., 2003), have been applied to
enhance the identification of various features.
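Of the methods listed, the Brovey transform is the simplest to state: each multispectral band is scaled by the ratio of the panchromatic value to the sum of the multispectral bands, injecting the high-frequency pan detail into the colour bands. A per-pixel sketch, with hypothetical band values:

```python
# Brovey transform fusion for one pixel.

def brovey(ms_bands, pan):
    """Scale each multispectral band by pan / sum(ms_bands)."""
    total = sum(ms_bands)
    if total == 0:
        return [0.0] * len(ms_bands)
    return [band * pan / total for band in ms_bands]

# Hypothetical pixel: (R, G, B) from the low-resolution multispectral
# image and the co-registered high-resolution panchromatic value.
print(brovey((60, 90, 50), pan=220))  # [66.0, 99.0, 55.0]
```

IHS and PCA fusion follow the same substitution idea in a transformed space: the intensity (or first principal) component is replaced by the panchromatic band before the inverse transform.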
2.2 Feature extraction
For the primary data acquisition we will address here only the
collection of data in mono-mode and we will not address tools
and techniques for stereo-mode data extraction. Therefore, we
will present only the case of extracting planimetric data from
image type data sources, scanned maps included. Usually, the
images are orthorectified and the scanned maps are
georeferenced.
The extraction of objects from imagery is generally based on
two characteristics of the pixel digital number values: a) the
similarity and b) the difference of adjacent pixel values. In
other words, it depends on how the discontinuity of pixel grey
values is treated, and on whether abrupt changes of the
intensity values are, based on certain criteria, significant
enough to indicate a boundary between different image
features. In addition, the type of feature is considered, that is,
whether we are interested in the extraction of linear or
polygonal features. 'A-priori' knowledge or other cues that
might exist and can be applied as additional conditions during
the feature extraction operations can enhance the extraction
procedures.
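The "difference of adjacent pixel values" criterion can be sketched in its simplest form: flag a pixel as a boundary candidate when the intensity jump to a neighbour exceeds a threshold. The image and threshold below are hypothetical:

```python
# Minimal discontinuity test: mark pixels whose grey value differs from
# the right or lower neighbour by more than a threshold.

def edge_mask(image, threshold):
    rows, cols = len(image), len(image[0])
    mask = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            right = abs(image[r][c] - image[r][c + 1]) if c + 1 < cols else 0
            down = abs(image[r][c] - image[r + 1][c]) if r + 1 < rows else 0
            if max(right, down) > threshold:
                mask[r][c] = 1
    return mask

image = [[10, 10, 200],
         [10, 10, 200],
         [10, 10, 200]]
print(edge_mask(image, threshold=50))  # boundary flagged along the middle column
```

Practical edge operators (Sobel, Canny, etc.) refine this idea with smoothing, gradient direction and non-maximum suppression, but the underlying criterion is the same intensity discontinuity.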
The property of pixel similarity was also discussed in the
section on feature recognition. Therefore, the use of pixel
classification methods to segment the image regions into
thematic polygons is also a tool for the extraction of these
polygonal features. If their boundaries are required as vector
data, they can be extracted and then vectorized via a
raster-to-vector (R=>V) conversion. The object-oriented image
classification approach is included in this group.
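The first step of such a raster-to-vector conversion can be sketched as collecting the boundary pixels of a classified thematic polygon, i.e., the pixels of the class that have at least one 4-neighbour outside it; the binary mask below is hypothetical:

```python
# Sketch of the raster-to-vector step: from a binary thematic mask,
# collect the polygon's boundary pixels as (row, col) coordinate pairs.

def boundary_pixels(mask):
    rows, cols = len(mask), len(mask[0])
    boundary = []
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    boundary.append((r, c))
                    break
    return boundary

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(boundary_pixels(mask))  # the four object pixels all lie on the boundary
```

A full R=>V conversion would then order these pixels into closed rings and simplify them into polygon vertices.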
Thresholding is another extraction method. It is simple, and its
similarity criterion is based on a range of grey values belonging
to the feature of interest, which is used as a threshold to separate
it from the background image data. It is usually applied to
scanned monochrome maps, where the map elements are
well distinguished from the general background, or to grey-level
images, for example the infrared Band 5 of Landsat 7 over an
area with many water bodies, where the histograms are bi- or
multi-modal and can be partitioned by a single or multiple
thresholds (Armenakis et al., 2003).
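For a bimodal histogram the threshold can even be chosen automatically; the sketch below uses Otsu's between-class-variance criterion (one common choice, not necessarily the one used in the cited work) on a hypothetical sample of dark water and bright land pixels:

```python
# Pick a single grey-value threshold from a bimodal histogram by
# maximising the between-class variance (Otsu's criterion), then apply
# it to separate the dark feature (e.g. water) from the background.

def otsu_threshold(pixels):
    best_t, best_var = 0, -1.0
    for t in range(min(pixels), max(pixels) + 1):
        bg = [p for p in pixels if p <= t]
        fg = [p for p in pixels if p > t]
        if not bg or not fg:
            continue
        wb, wf = len(bg) / len(pixels), len(fg) / len(pixels)
        mb, mf = sum(bg) / len(bg), sum(fg) / len(fg)
        var = wb * wf * (mf - mb) ** 2   # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Hypothetical bimodal sample: water pixels near 20, land pixels near 200.
pixels = [18, 20, 22, 19, 21, 198, 200, 202, 199, 201]
t = otsu_threshold(pixels)
water = sorted(p for p in pixels if p <= t)
print(t, water)
```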
Polygonal image regions can be extracted using their texture
description (Haralick, 1979; Zhang, 2001; Kachouie, 2004).
Texture represents fineness and coarseness, roughness, contrast,
regularity, directionality and periodicity in image patterns.
Texture measures can be expressed in terms of variance, mean,
entropy, energy and homogeneity of the kernel image window.
They can be used to examine the spatial structure of the grey
values.

[The remainder of this page is truncated in the source. The surviving line fragments indicate that the section continues with grey level co-occurrence measures of texture, edge detection operators for linear features, and the semi-automated extraction of features such as buildings, followed by:]

2.3 Change detection

[Text truncated in source.]