Carsten Garnica
contrast to conventional edge-preserving filters, the number of disturbances, such as small edge fragments caused by noise in homogeneous areas, is considerably smaller.
The application of the algorithm also simplifies the parameterisation of the subsequent feature extraction steps: the magnitude of the edges is preserved, while the gradient magnitude in the homogeneous areas is reduced almost to zero, which facilitates the adjustment of thresholds. When the Canny edge detection algorithm is used, for example, the gradient magnitude threshold can be set to around 5 gradient units. This threshold no longer depends on the strength of the noise in the original image.
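The effect of such a noise-independent threshold can be illustrated with a small sketch (illustrative NumPy code, not the authors' implementation; the synthetic image and the threshold of 5 gradient units are only examples): after smoothing, the gradient magnitude in homogeneous regions is close to zero, so a small fixed threshold separates genuine edges from residual noise.

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude of a 2-D gray-value image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

# Synthetic example: two flat regions separated by a step edge of
# 40 gray values (stands in for a well-smoothed image).
img = np.zeros((16, 16))
img[:, 8:] = 40.0

mag = gradient_magnitude(img)
edges = mag > 5.0  # fixed threshold, independent of the original noise level
```

In the homogeneous halves the magnitude is exactly zero, so only the pixels adjacent to the step exceed the threshold.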
4.4 Computation time
Due to the complexity of the calculations, the new algorithm needs more computation time than conventional ones (cf. table 7). In the presented example a region of 9×9 pixels has been used for the extended approach. The cost of this extension, which was also used in the example of table 5, is about 60% on top of the base computation time of the MHN approach.
Filter   | Gaussian kernel | SNN | MHN | new approach
time (s) |       10        | 15  | 17  |      27

test image: 512 × 512, 3 channels; computer: 233 MHz AMD
Table 7. Comparison of computation times
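The figures in table 7 are consistent with the quoted overhead of the extension: relative to the MHN base time, the extended approach costs roughly 60% more, as this small calculation shows.

```python
# Overhead of the extended approach relative to the MHN filter (table 7).
mhn_time = 17.0  # seconds for the MHN approach
new_time = 27.0  # seconds for the new approach with the 9x9 extension
overhead = (new_time - mhn_time) / mhn_time  # ~0.59, i.e. roughly 60%
```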
5 CONCLUSION
Often the choice of filtering algorithm for image smoothing is not crucial; but when the results are required to be of superior quality and the quality of the input data is problematic, close attention must be paid to the concept, structure and impact of the pre-processing algorithm to be applied.
It has been shown that the newly developed algorithm combines the desired features of different types of filters. It is a combination of the Maximum Homogeneity Neighbour filter and segmentation techniques. The algorithm provides a high degree of smoothing in the homogeneous areas while preserving all image structures such as edges and corners, so the subsequent feature extraction steps can be applied without being affected by noise.
Possible fields of application of the proposed approach are in principle all tasks where image improvement is necessary, especially as a pre-processing step for feature extraction or image segmentation. The scale of the images is irrelevant, so the algorithm can be applied to aerial images as well as to all kinds of close-range images. The number of channels is irrelevant, too.
If the image contains only a small amount of noise, if the objects to be detected are comparatively large, or if the local image contrast is high, the results are just as good as those produced by conventional algorithms.
Because the algorithm needs more computation time than conventional ones, it is primarily worth the effort when feature extraction promises to be difficult. This may be the case if some of the homogeneous areas to be extracted are very small (e.g. down to 3×3 pixels) or if the contrast between adjacent areas in the images is low (e.g. less than 10 gray value units). If the images contain a high amount of noise, the use of the algorithm is strongly recommended, because the results are of significantly higher quality.
6 REFERENCES
Ballard, D.H., Brown, C.M., 1982. Computer Vision. Prentice Hall, Englewood Cliffs, New Jersey.
Canny, J., 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6.
Forlani, G., Malinverni, E., Nardinocchi, C., 1996. Using perceptual grouping for road recognition. Proc. of 18th ISPRS Congress, Volume XXXI, Part B3, Commission III, Vienna.
Haralick, R.M., Shapiro, L.G., 1992. Computer and Robot Vision, Volume I. Addison-Wesley Publishing Company.
Wang, Y., 1994. Strukturzuordnung zur automatischen Oberflächenrekonstruktion. Wissenschaftliche Arbeiten der
Fachrichtung Vermessungswesen der Universität Hannover, Nr. 207, ISSN 0174-1454.
International Archives of Photogrammetry and Remote Sensing. Vol. XXXIII, Part B3. Amsterdam 2000. 325