dimensionality was at least 256 (16*16). Still, the discriminatory power of the transformed space was equal to or better than that of the five Haralick features. The same applies to the power spectrum method compared with the descriptors of /DasJer84/. This holds only for the k-NN and ALSM classifiers, not for the ML-classifier, which again shows the problems involved in ML-classification. In that case the poor performance of the classifier distorted the results so much that no sound conclusion could be drawn.
Descriptor             Classifier
                       ALSM     k-NN     ML
Variance               73/78    77/78    65/68
Cooccurrence           97/98    98/98    76/75
Fourier Spectrum       93/98    94/96    75/77
Fractal-dimension      76/89    75/90    69/80
Fractal-signature      76/88    76/90    69/74
AVHR                   94/96    91/95    77/78

Table 1. Summary of the results in case of 1:15000 aerial imagery. Percentage of correct classifications, without/with spectral features.
Descriptor             Classifier
                       ALSM     k-NN     ML
Variance               59/82    59/82    55/61
Cooccurrence           86/97    87/98    69/82
Fourier Spectrum       74/94    74/93    65/77
Fractal-dimension      71/94    71/95    69/81
Fractal-signature      70/95    71/94    63/74
AVHR                   80/93    81/93    70/74

Table 2. Summary of the results in case of the SPOT image 1 (rural). Percentage of correct classifications, without/with spectral features.
Descriptor             Classifier
                       ALSM     k-NN     ML
Variance               62/81    62/82    59/69
Cooccurrence           87/96    87/97    70/86
Fourier Spectrum       71/92    73/92    65/82
Fractal-dimension      71/94    71/95    69/81
Fractal-signature      70/95    70/94    63/80
AVHR                   80/93    81/93    70/74

Table 3. Summary of the results in case of the SPOT image 2 (urban). Percentage of correct classifications, without/with spectral features.
Comparison of the descriptors
Tables 1-3 summarize all the results. As can be seen, no clear distinction can be made, but the cooccurrence statistics usually yield the best results, achieving a very low error rate of 2-4%. As could be predicted, the larger scale imagery favors the more complex descriptors. In the smaller scale images the very simple fractal descriptor also produces good results (error rate of 7%) and clearly competes with the other simple descriptor, namely the variance.
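The cooccurrence statistics referred to here are grey level cooccurrence matrix (GLCM) features in the spirit of Haralick. The paper does not give an implementation; the following is only a minimal sketch of the idea, assuming quantization to 16 grey levels, a single one-pixel displacement, and a few of the usual Haralick-type measures. These parameter choices and function names are illustrative and are not taken from the experiments.

```python
import numpy as np

def glcm(window, levels=16, dx=1, dy=0):
    """Grey level cooccurrence matrix of a quantized window for one
    displacement (dx, dy), normalized to joint probabilities."""
    q = np.clip((window.astype(np.float64) / 256.0 * levels).astype(int), 0, levels - 1)
    P = np.zeros((levels, levels))
    rows, cols = q.shape
    for y in range(rows - dy):
        for x in range(cols - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def haralick_features(P):
    """A few classical Haralick-type descriptors of a GLCM."""
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)
    energy = np.sum(P ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))
    return np.array([contrast, energy, homogeneity, entropy])

# Example: features of a random 32x32 texture window
window = np.random.randint(0, 256, (32, 32))
features = haralick_features(glcm(window))
```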
Contrary to expectations, the fractal signature does not bring more information than the fractal dimension. This might be caused by the relatively small window size, together with the method used to estimate the fractal signature.
In contradiction to the conclusion drawn in /ZhuDun90/, the AVHR seems to have a somewhat higher error rate than the cooccurrence statistics.
In the SPOT images it seems clear that the textural descriptors alone cannot give satisfactory results.
Comparison of the classifiers
As could be expected, the ML-classifier behaves worst. No difference between the performance of the k-NN and ALSM classifiers can be seen.
5. CONCLUDING REMARKS
The results indicated very clearly that the choice of the classifier is of utmost importance when texture classification is performed. Both non-parametric classifiers used (k-NN and ALSM) can be highly recommended in this context. The use of an ML-classifier should be avoided.
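For reference, a minimal sketch of the k-NN rule of /CovHar67/, as it would be applied to such texture feature vectors, is given below (pure NumPy, Euclidean distance, majority vote; k=5 is an arbitrary choice, not the value used in the experiments). The ALSM classifier is a learning subspace method and is not reproduced here.

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=5):
    """Classify feature vector x by a majority vote among the
    k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Example: two texture classes described by 4-dimensional feature vectors
train_X = np.vstack([np.random.randn(20, 4), np.random.randn(20, 4) + 3.0])
train_y = np.array([0] * 20 + [1] * 20)
print(knn_classify(train_X, train_y, np.random.randn(4) + 3.0))
```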
For larger scale imagery some more complex measures are called for, but in the case of smaller scale images the simple descriptor based on the fractal dimension computed in the four main directions of a local window seems to work nicely and is computationally light.
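The exact estimator used for these directional fractal dimensions is not reproduced here. The sketch below assumes a fractional-Brownian-motion type estimate, in which the mean absolute grey level difference over a distance d behaves as d^H and the fractal dimension is taken as 3 - H, computed along the horizontal, vertical and the two diagonal directions of the window; the function names, the maximum distance and the window size are illustrative only.

```python
import numpy as np

# The four main directions of the window: horizontal, vertical, and the two diagonals.
DIRECTIONS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def directional_fractal_dimension(window, direction, max_dist=8):
    """Estimate a fractal dimension along one direction from the
    fractional-Brownian-motion model: E|I(p+d) - I(p)| ~ d**H, D = 3 - H."""
    dy, dx = direction
    img = window.astype(np.float64)
    dists, diffs = [], []
    for d in range(1, max_dist + 1):
        sy, sx = dy * d, dx * d
        a = img[max(sy, 0):img.shape[0] + min(sy, 0), max(sx, 0):img.shape[1] + min(sx, 0)]
        b = img[max(-sy, 0):img.shape[0] + min(-sy, 0), max(-sx, 0):img.shape[1] + min(-sx, 0)]
        dists.append(d)
        diffs.append(np.mean(np.abs(a - b)) + 1e-12)
    H, _ = np.polyfit(np.log(dists), np.log(diffs), 1)
    return 3.0 - H

def fractal_descriptor(window):
    """Four-dimensional texture descriptor: one fractal dimension per main direction."""
    return np.array([directional_fractal_dimension(window, d) for d in DIRECTIONS])

# Example: descriptor of a 32x32 window
print(fractal_descriptor(np.random.randint(0, 256, (32, 32))))
```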
In the case of satellite images, the spectral channels should be combined with the texture descriptors before reasonable results can be expected.
The reason for these relatively optimistic results (error rates of the order of 5%) lies partly in the test data: only ideal windows were used. In practical applications the border areas between texture regions cause some trouble, and the test should also be carried out using such indistinct areas.
Although the results look promising, one should not forget that the methods applied are all rather heuristic in nature. The best approach to texture analysis would be to model the whole sampling process, e.g. with the help of stochastic 2D processes. We hope that in the future the algorithms and hardware implementations will be powerful enough to utilize these more complete and more formal models.
6. REFERENCES
/Bajcsy73/ Bajcsy, R.: Computer Description of Textured Surfaces. 3rd International Joint Conference on Artificial Intelligence, 1973, Stanford, pp. 572-579.

/ConHar80/ Conners, R.W., Harlow, C.A.: A Theoretical Comparison of Texture Algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol 2, No 3, 1980, pp. 204-222.

/CovHar67/ Cover, T.M., Hart, P.E.: Nearest Neighbour Pattern Classification. IEEE Transactions on In-