2. EDGE PRESERVING SMOOTHING
Edge preserving smoothing is a pre-processing step that is often
necessary in order to facilitate, or even to make possible, subsequent
steps such as stereo processing, object recognition etc.
In the past a large number of methods for edge preserving
smoothing have been developed (see e.g. (Klette, Zamperoni,
1992)), but here a method is presented which fits the discrete
dynamical network (1).
That method (Jahn, 1999a), which has a certain relation to the
anisotropic diffusion approach (Perona, Shiota, Malik, 1994), is
more general than edge preserving smoothing, but here it is
applied only to that special problem. We consider M points P_k =
(x_k, y_k) (k = 1, ..., M). These points are the pixel positions (i, j) in
the case of edge preserving smoothing. We now assign to each
point P_k the points P_k' of the Voronoi neighbourhood N_V(P_k),
which is the 4-neighbourhood in the case of raster image
processing. For simplicity, the notation N(k) instead of N_V(P_k)
is used in the following. Furthermore, to each point P_k a feature
vector f_k is assigned (in the case of edge preserving smoothing the
(scalar) features are the grey values g_ij).
To derive a feature smoothing algorithm, the feature vector f_k is
averaged over the neighbourhood N(k):

\bar{f}_k = \frac{1}{n_k + 1} \left( f_k + \sum_{k' \in N(k)} f_{k'} \right)    (2)

Here, n_k is the number of Voronoi neighbours of point P_k (n_k =
4 in the case of raster image processing).
An equivalent (recursively written) notation of (2) is

f_k(t+1) = f_k(t) + \frac{1}{n_k + 1} \sum_{k' \in N(k)} \left[ f_{k'}(t) - f_k(t) \right]    (3)

(t = 0, 1, 2, ...)

The initial condition is f_k(0) = f_k.
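As an illustration only (not taken from the paper), one step of the linear recursion (3) can be sketched in Python/NumPy for a grey-value raster image with 4-neighbourhood; the edge-replicating border handling is an assumption, since the text does not specify how boundary pixels are treated.

import numpy as np

def linear_smoothing_step(f):
    """One step of recursion (3): each pixel moves towards the mean
    of itself and its four Voronoi neighbours (n_k = 4)."""
    p = np.pad(f, 1, mode="edge")                 # replicate borders (assumption)
    diff_sum = ((p[:-2, 1:-1] - f) + (p[2:, 1:-1] - f) +
                (p[1:-1, :-2] - f) + (p[1:-1, 2:] - f))
    return f + diff_sum / (4 + 1)

Iterating this step reproduces the blurring behaviour discussed next, which motivates the weighted scheme (4).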
Because of its linearity, the recursive algorithm (3) diminishes the
resolution of the image with increasing recursion level (or discrete
time) t and blurs the edges more and more. Here, however, we do not
want to blur edges or to smooth out image details. Therefore, the
feature differences in (3) must be weighted properly to prevent this.
Introducing weights w_{k,k'}, the following scheme is obtained:
f_k(t+1) = f_k(t) + \frac{1}{n_k + 1} \sum_{k' \in N(k)} w_{k,k'}(t) \left[ f_{k'}(t) - f_k(t) \right]    (4)
The weight w_{k,k'} is chosen as a function of the edge strength
between the features f_k and f_{k'}. Averaging of both features is only
appropriate if the edge strength is weak. To choose the weights, the
edge strength is introduced according to
x_{k,k'} = \frac{\left| f_k - f_{k'} \right|}{t_f}    (5)
In (5), |f| is the norm of the vector f (it reduces to the absolute
value in the case of a 1D feature f), and t_f is an (adaptive) threshold.
Now, the weights can be introduced via

w_{k,k'} = s\left( x_{k,k'} \right)    (6)
where s(x) is a non-increasing function with s(0) = 1 and
s(∞) = 0. Good results are obtained with the function

s(x) = …    (7)

but other functions are possible too.
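For illustration, the weighted scheme (4) with edge strength (5) and weights (6) might be sketched as follows (Python/NumPy, not from the paper); because the concrete function (7) is not reproduced here, the choice s(x) = 1/(1 + x^2), which satisfies s(0) = 1 and s(∞) = 0, as well as the fixed threshold t_f are assumptions.

import numpy as np

def s(x):
    # Assumed non-increasing function with s(0) = 1 and s(inf) = 0;
    # the paper's function (7) may differ.
    return 1.0 / (1.0 + x**2)

def weighted_smoothing_step(f, t_f):
    """One step of scheme (4) with weights (6) on a grey-value image,
    4-neighbourhood (n_k = 4), borders replicated (assumption)."""
    p = np.pad(f, 1, mode="edge")
    update = np.zeros_like(f, dtype=float)
    # North, south, west and east neighbours of every pixel.
    for nb in (p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]):
        diff = nb - f
        x = np.abs(diff) / t_f            # edge strength as in (5)
        update += s(x) * diff             # weighted feature difference
    return f + update / (4 + 1)

Near an edge the difference |f_{k'} - f_k| is large, the weight s(x) approaches zero, and the corresponding neighbour contributes almost nothing to the average, which is exactly the edge preserving behaviour intended by (4).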
The algorithm (4) is of type (1) and thus represents a special
discrete dynamical network. We learn from (4) that, in contrast
to commonly used neural networks, not the signals f_k are
weighted and summed but the differences f_{k'} - f_k of
neighbouring neurons. Furthermore, the non-linearity, here given
by the function s(x) in (7), differs from the sigmoid function.
Figures 1 to 3 show the capabilities of the algorithm.
In algorithm (4) the averaging is confined to the (small) 4-
neighbourhood. Therefore, many iterations (typically 20 - 30)
are necessary to obtain sufficient smoothing. To reduce the
number of iterations, larger neighbourhoods can be considered
(Jahn, 1999b).
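A hypothetical usage of the weighted_smoothing_step sketch above; the test image, the threshold t_f and the iteration count (within the 20 - 30 range mentioned above) are illustrative values only.

import numpy as np

rng = np.random.default_rng(0)
img = np.full((64, 64), 100.0)            # synthetic step-edge image
img[:, 32:] = 150.0
img += rng.normal(0.0, 12.0, img.shape)   # additive noise

f = img.copy()
for _ in range(25):                       # typically 20 - 30 iterations
    f = weighted_smoothing_step(f, t_f=30.0)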
Fig. 1. left: simulated image with additive noise (S/N = 1 and S/N = 4, resp.); center: smoothed image; right: edges in smoothed image