A SURVEY ON BOUNDARY DELINEATION METHODS
Mathias J.P.M. Lemmens
Faculty of Geodetic Engineering
Delft University of Technology
The Netherlands
lemmens@geo.tudelft.nl
Commission III, Working Group 3
KEY WORDS: Photogrammetry, Remote Sensing, Feature, Edge, Extraction, Status, Theory
ABSTRACT
The importance of boundary delineation is indicated by the large amount of literature devoted to the topic. Although the
subject of intensive research for the last three decades, the problem is still poorly understood and largely unsolved. The main
reasons for failure are that the image models underlying the design of edge detection schemes describe the actual data set
poorly, and that the relationship between data and required information can be modeled only very weakly. The aim of the
present paper is to structure the massive volume of edge detection approaches and to arrive at insight into their major merits
and shortcomings.
1 Introduction
The role of delineation of boundaries is crucial for a broad
range of geo information related activities, such as semi-
automatic mapping, GIS-updating, stereo-matching, and
object-based multispectral classification. Anyone who has
been involved in the extraction of objects from unrestricted
scenes, such as recorded by aerial and space imagery, will have
encountered difficulties in obtaining reliable object outlines.
Indeed, one of the key problems that makes realization
of the above tasks so hard is the outlining of boundaries. Al-
though several attempts have been undertaken to put edge
detection in a more rigorous mathematical framework, in-
cluding: Brooks (1978), Marr & Hildreth (1980), Haralick
& Watson (1981), Hildreth (1983), Canny (1986), Nalwa &
Binford (1986), and Torre & Poggio (1986), a coherent the-
ory could not be developed, and no general algorithm that
can be applied successfully to all types of images has emerged.
The relative merits and characteristics of the many individual
methods when applied to unrestricted real-world scenes are
not at all clear. Numerous legends circulate about the rel-
ative merits of different operators (Fleck, 1992). Therefore,
the choice of a particular edge detection scheme seems to
rest more on the preferences and preoccupations of the
user than on the real capabilities of the scheme. Our aim is to
structure the existing methods and to examine their merits,
based on our extensive experience with the subject (Lemmens,
1996). Existing surveys can be subdivided into those solely
devoted to edge detection, including: (Davis, 1975; Levialdi,
1981; Peli & Malah, 1982) and the ones which discuss seg-
mentation more generally, including: (Fu & Mui, 1981; Har-
alick & Shapiro, 1985; Pal & Pal, 1993). Furthermore, regular
textbooks (e.g. Rosenfeld & Kak, 1982; Pratt, 1991; Ballard
& Brown, 1982; Davies, 1990) present introductions. Seg-
mentation schemes may be divided into three main categories
(Fu & Mui, 1981; Sonka et al., 1993): (1) characteristic fea-
ture thresholding or clustering, (2) region extraction, and (3)
edge finding. We focus here on edge finding, and more specif-
ically, on local edge detection schemes.
2 Edge Finding Process
Basically, edge finding schemes consist of (1) edge detection,
and (2) edge localization. The edge detection part, which is
the hard problem and is therefore the sole focus here, con-
sists of four steps: (1) smoothing, (2) local edge detection,
(3) thresholding and thinning, and (4) edge linking.
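For orientation, the whole four-step chain is available in packaged form in modern libraries; the following minimal sketch, which uses scikit-image's Canny detector as a stand-in (an assumption of this illustration, not a method discussed in the survey), runs the chain end to end before the individual steps are examined below.

```python
# A minimal end-to-end sketch of the four-step chain, assuming
# scikit-image as a modern stand-in; parameter values are
# illustrative only.
import numpy as np
from skimage import feature

image = np.random.rand(256, 256)          # placeholder grey-value image
edges = feature.canny(image,
                      sigma=1.0,          # step 1: Gaussian smoothing
                      low_threshold=0.1,  # step 3: hysteresis thresholds
                      high_threshold=0.3) # steps 2-4 handled internally
print(edges.shape, edges.dtype)           # boolean edge map
```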
In general, local edge detection is based on some form of
differentiation of the local grey value function. Since dif-
ferentiation is a mildly ill-posed problem (Torre & Poggio,
1986) smoothing is often applied beforehand for regulariza-
tion purposes. Nevertheless, smoothing should be avoided,
when possible, since linear smoothing tends (1) to blur the
weak edges, (2) to reduce the localization accuracy, and (3)
to merge closely spaced edges, while non-linear smoothing fil-
ters, such as the Kuwahara and the median filter, tend to dis-
locate edges and to remove thin, line-shaped objects such as
roads. Furthermore, smoothing introduces correlation among
the observations, which may degrade the performance of
subsequent processing steps.
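As an illustration of this trade-off, a minimal sketch of regularized differentiation, assuming NumPy/SciPy, a Sobel operator, and an illustrative sigma (none of which are prescribed by the text):

```python
# Gaussian smoothing (regularization) followed by differentiation.
# sigma controls the amount of smoothing and hence the
# blur/localization trade-off noted above.
import numpy as np
from scipy import ndimage

def gradient_magnitude(image, sigma=1.0):
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gx = ndimage.sobel(smoothed, axis=1)   # derivative across columns
    gy = ndimage.sobel(smoothed, axis=0)   # derivative across rows
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```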
Thresholding is a decision process in which the label edge or
non-edge is assigned to each pixel, based on the response of
the local edge detector. Usually the response is tested against
one or more prespecified thresholds. These thresholds may be
determined on a heuristic basis or by a quantification of
image disturbances such as noise.
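One way to make the noise-based alternative concrete, as a sketch: estimate the noise level robustly and set the threshold to a multiple of it. The MAD estimator and the factor k are assumptions of this illustration, not choices made in the text.

```python
import numpy as np

def threshold_edges(response, k=3.0):
    # Robust noise estimate via the median absolute deviation;
    # 0.6745 scales the MAD to the standard deviation of a Gaussian.
    mad = np.median(np.abs(response - np.median(response)))
    return response > k * mad / 0.6745
```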
Due to the spatial extent of local edge operators, the ini-
tial edge map is in general not one pixel thick. Thinning is
necessary to obtain one-pixel-thick outlines. One of the
possibilities is to use, after thresholding, a skeletonizing algo-
rithm to erode the thick edges. To obtain higher localization
precision one may use, before thresholding, non-maximum
suppression, which excludes a pixel as an edge if its response is
lower than those of its two neighbouring pixels along its
gradient direction, i.e. across the edge. The disadvantage is that
junction pixels may be deleted too. Lacroix (1988) proposes
a remedy by allowing edge pixels to form relative maxima,
i.e. real edge pixels are permitted to have pixels with higher
responses in their vicinity as long as there are sufficient pixels
in the neighbourhood with lower responses.
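A sketch of plain non-maximum suppression (without the Lacroix relative-maxima refinement), assuming the magnitude and direction arrays from the gradient sketch above; the quantization of the gradient direction into four neighbour pairs is the usual discrete approximation:

```python
# A pixel survives only if its response is not exceeded by either
# of its two neighbours along the (quantized) gradient direction.
import numpy as np

def non_maximum_suppression(magnitude, direction):
    out = np.zeros_like(magnitude)
    angle = np.rad2deg(direction) % 180.0   # fold opposite directions
    rows, cols = magnitude.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            a = angle[r, c]
            if a < 22.5 or a >= 157.5:      # gradient ~horizontal
                n1, n2 = magnitude[r, c - 1], magnitude[r, c + 1]
            elif a < 67.5:                  # gradient ~45 degrees
                n1, n2 = magnitude[r - 1, c - 1], magnitude[r + 1, c + 1]
            elif a < 112.5:                 # gradient ~vertical
                n1, n2 = magnitude[r - 1, c], magnitude[r + 1, c]
            else:                           # gradient ~135 degrees
                n1, n2 = magnitude[r - 1, c + 1], magnitude[r + 1, c - 1]
            if magnitude[r, c] >= n1 and magnitude[r, c] >= n2:
                out[r, c] = magnitude[r, c]
    return out
```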
Finally, the edge pixels are linked to form a boundary of con-
nected pixels, which may be generalized and vectorized in a
postprocessing stage for storage in, for example, a GIS. To
obtain more reliable results one may examine the operator
responses in a neighbourhood of connected pixels, using con-
text information.
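A crude sketch of the linking stage, assuming an 8-connected grouping of the thresholded edge map; the minimum-length filter here is an illustrative stand-in for the context-based examination mentioned above, not a method from the text.

```python
# Group edge pixels into 8-connected components ("chains") and
# discard fragments shorter than min_length pixels.
import numpy as np
from scipy import ndimage

def link_edges(edge_map, min_length=10):
    labels, n = ndimage.label(edge_map, structure=np.ones((3, 3)))
    sizes = ndimage.sum(edge_map.astype(float), labels,
                        index=range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = np.asarray(sizes) >= min_length
    return keep[labels]                     # boolean map of kept chains
```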