5.6 Edge Orientation Coherence
The basic notion underlying edge orientation coherence approaches is that neighbouring pixels lying on an edge will show approximately equal orientation (cf. Gregson, 1993). One may therefore examine the gradient directions of the pixels. This can be done along the following lines:
1. Compute the mean and the variance of the directions of those gradients in a local neighbourhood (e.g. a 3 x 3 window) whose magnitude exceeds a predefined threshold;
2. Decide, using an F-test, whether the orientations of all gradients point sufficiently well in the same direction, on the basis of the computed variance and an a priori variance measure derived from the edge orientation bias introduced by the detector and a noise estimate;
3. If the computed variance indicates that all orientations are the same, assign to the central pixel the mean of the directions of the gradients.
The above process can be refined by removing outliers step by step and by examining whether the remaining orientations point in the same direction and are spatially connected in such a way that they are likely to form an edge.
6 Discussion
The apparently simple problem of locating edges in an image has proved to be very difficult and is still poorly understood. There is probably no mathematical approach or trick that has remained untried in tackling the boundary delineation problem, which is an indication of its intricacy. Optimal methods based on thorough theoretical considerations turn out to produce poor results on aerial and satellite images, because the underlying assumptions about the data are often violated. In particular, the design of many of the (optimal) edge detection schemes is based on assumptions that are unrealistic for images of
non-restricted scenes, including: (1) the image contains only
ideal step edges embedded in zero-mean Gaussian distributed
noise, (2) the image may be described as an analytical func-
tion, (3) the only intensity changes are locally straight step
edges, (4) intensity varies linearly in the direction perpendic-
ular to the edge, (5) edges are broadly spaced, and (6) abrupt
intensity changes in the image correspond to meaningful ob-
ject boundaries in the scene. One of the main reasons for this failure is that local edge detectors cannot discriminate among the many types of features that may be present in the image. Even in noisy and textured areas, high responses will occur.
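As an illustration of assumption (1), the signal model underlying many of the 'optimal' detectors (e.g. Canny, 1986) is, in one dimension, an ideal step in additive zero-mean Gaussian noise; the notation below is generic and not taken from any particular scheme:

\[
  f(x) = A\,u(x - x_0) + n(x), \qquad
  u(x) = \begin{cases} 0, & x < 0 \\ 1, & x \ge 0, \end{cases} \qquad
  n(x) \sim \mathcal{N}(0, \sigma^2),
\]

where A is the edge contrast and x_0 the edge position.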
The above weaknesses of edge detection schemes, combined with the fact that the boundary delineation problem is task-domain dependent, lead to the inevitable conclusion that the exploration of specific geometric object information is indispensable for arriving at reliable boundary delineations. This conclusion raises questions such as: how can adequate descriptions of specific geometric constraints be obtained, and how can these constraints be matched with the image function?
The main reasons why so many edge detection schemes could emerge are: (1) the broad variety of mathematical principles and tricks on which an edge detector can be based, and (2) the fact that existing techniques are often not suited to the particular task the researcher has at hand, forcing a search for other methods and thus resulting in yet another new approach.
It is remarkable that the performance of the many local edge detectors, whether they are based on heuristic grounds or on rigorous mathematical considerations, does not exhibit pronounced differences. The choice of the type of preprocessing (smoothing) and the type of postprocessing, in particular the incorporation of context, turns out to be more important for the final result than the choice of a particular local edge detector.
REFERENCES
[1] Ballard, D.H., Brown, C.M., 1982, Computer Vision,
Prentice-Hall.
[2] Babaud, J., Witkin, A.P., Baudin, M., Duda, R.O., 1986,
Uniqueness of the Gaussian kernel for scale-space filtering,
PAMI, 8 (1), 26-33.
[3] Bergholm, F., 1987, Edge focusing, PAMI, 9 (6), 726-741.
[4] Besl, P.J., Jain, R.C., 1988, Segmentation through variable-
order surface fitting, PAMI, 10 (2), 167-192.
[5] Brooks, M.J., 1978, Rationalizing edge detectors, CGIP, 8,
277-285.
[6] Canny, J., 1986, A computational approach to edge detec-
tion, PAMI, 8 (6), 679-698.
[7] Cheng, X.S., 1990, Design and implementation of an image
understanding system: DADS, Ph.D. thesis, DUT.
[8] Davies, E.R., 1990, Machine vision, AP.
[9] Davis, L.S., 1975, A survey of edge detection techniques,
CGIP, 4, 248-270.
[10] De Gunst, M.E., Han, C.S.L.A., Lemmens, M.J.P.M., Van
Munster, R.J., 1991, Automatic extraction of roads from
SPOT images, In: OEEPE Official Pub. (27), pp. 131-140.
[11] Dreschler, L., Nagel, H.-H., 1982, Volumetric model and 3D
trajectory of a moving car derived from monocular TV frame
sequences of a street scene, CGIP, 20, 199-228.
[12] Duncan, J.S., Birkhölzer, T., 1992, Reinforcement of linear
structure using parametrized relaxation labeling, PAMI, 14
(5), 502-515.
[13] Elliott, H., Srinivasan, L., 1981, An application of dynamic
programming to sequential boundary estimation, CGIP, 17,
291-314.
[14] Fleck, M.M., 1992, Some defects in finite-difference edge
finders, PAMI, 14 (3), 337-345.
[15] Förstner, W., 1986, A feature based correspondence algo-
rithm for image matching, PRS, 26-111, 1-17.
[16] Förstner, W., 1993, Feature extraction in digital photogram-
metry, Photogrammetric Record, 14 (82), 595-611.
[17] Förstner, W., 1994, A framework for low level feature extrac-
tion, in: Lecture Notes in Computer Science, 801, Springer,
383-394.
[18] Fu, K.S., Mui, J.K., 1981, A survey on image segmentation,
PR, 13, 3-16.
[19] Gerbrands, J.J., 1988, Segmentation of noisy images, Ph.D.
thesis, DUT.
[20] Ghosal, S., Mehrotra, R., 1993, Orthogonal moment operators for subpixel edge detection, PR, 26 (2), 295-306.
[21] Gregson, P.H., 1993, Using angular dispersion of gradient
direction for detecting edge ribbons, PAMI, 15 (7), 682-696.
[22] Grün, A., Li, H., 1994, Semi-automatic road extraction by
dynamic programming, PRS, 30-111, 324-332.
[23] Hancock, E.R., Kittler, J., 1990, Edge-labeling using
dictionary-based relaxation, PAMI, 12 (2), 165-181.
[24] Haralick, R.M., 1980, Edge and region analysis for digital
image data, CGIP, 12, 60-73.