papers to each other, but contradictory results have been reported. This most likely stems from the difficulty involved in their use and parameter tuning. Both methods, as such, produce high-dimensional feature vectors, and the usual approach is to compute some ad hoc features from the original descriptors. This discards part of the original information and makes the comparison difficult. We have tried to avoid this problem by careful parameter tuning and by utilizing standard feature extraction methods (/DevKit82/) which do not drastically reduce the amount of information, only the dimensionality of the feature vectors.
In addition to these two popular texture descriptors, we have included three other texture measures in the comparison. Firstly, a simple first order statistic, the local variance, serves as a kind of reference. Secondly, the appealing fractal based descriptors, the fractal dimension and the fractal signature, are included. Thirdly, a new method, called the amplitude varying rate statistical approach after Zhuang and Dunn /ZhuDun90/, is included because of the promising results reported in /ZhuDun90/.
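The local variance reference measure can be sketched as follows; the window size here is an illustrative assumption rather than the setting used in this study.

    import numpy as np
    from scipy import ndimage

    def local_variance(image, window=7):
        # Local variance in a (window x window) neighbourhood, computed as
        # E[x^2] - (E[x])^2.  The window size is an illustrative choice.
        img = np.asarray(image, dtype=np.float64)
        mean = ndimage.uniform_filter(img, size=window, mode="reflect")
        mean_sq = ndimage.uniform_filter(img ** 2, size=window, mode="reflect")
        return mean_sq - mean ** 2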
When comparing the performance of texture descriptors in a classification context, attention has to be paid not only to the descriptors but also to the classifier itself: it has to be chosen so that it works properly with the selected features. The usual brute-force application of an "optimal" maximum likelihood classifier assuming multi-normal probability densities has, in the writers' opinion, distorted many comparative studies.
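For concreteness, the decision rule referred to here can be sketched as follows: each class is modelled by its own mean vector and covariance matrix, and a sample is assigned to the class with the highest Gaussian log-likelihood. This is only an illustrative reconstruction assuming equal class priors, not the implementation used in the studies cited.

    import numpy as np

    def fit_gaussian_ml(train_feats, train_labels):
        # One mean vector and covariance matrix per class (multi-normal model).
        feats = np.asarray(train_feats, dtype=np.float64)
        labels = np.asarray(train_labels)
        params = {}
        for c in np.unique(labels):
            x = feats[labels == c]
            mean = x.mean(axis=0)
            # Small regularization keeps the covariance matrix invertible
            cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
            params[c] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return params

    def classify_gaussian_ml(params, test_feats):
        # Assign each sample to the class with the highest log-likelihood
        # (equal priors are assumed, so the prior term is omitted).
        preds = []
        for x in np.asarray(test_feats, dtype=np.float64):
            best, best_ll = None, -np.inf
            for c, (mean, inv_cov, logdet) in params.items():
                diff = x - mean
                ll = -0.5 * (logdet + diff @ inv_cov @ diff)
                if ll > best_ll:
                    best, best_ll = c, ll
            preds.append(best)
        return np.array(preds)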
Especially when textural descriptors are used, the decision boundaries can be highly non-linear; in such cases a non-parametric classifier is the only reasonable choice. The simple, but computationally heavy, k-NN classifier has been shown to have a large-sample error rate that decreases monotonically towards the optimal Bayesian error rate /CovHar67/.
Its computational burden can also be reduced considerably by the so-called editing and condensing techniques (see Chapter 3). Because a k-NN classifier can produce highly non-linear decision boundaries, it is compared extensively with the ML classifier in the present paper.
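A minimal sketch of the k-NN decision rule is given below; the value of k and the Euclidean metric are illustrative assumptions rather than the settings of this study, and the editing and condensing techniques of Chapter 3 would prune the training set before this step.

    import numpy as np

    def knn_classify(train_feats, train_labels, test_feats, k=5):
        # Plain k-NN: Euclidean distance, majority vote among the k nearest
        # training samples.  k and the metric are illustrative choices.
        train_feats = np.asarray(train_feats, dtype=np.float64)
        train_labels = np.asarray(train_labels)
        preds = []
        for x in np.asarray(test_feats, dtype=np.float64):
            dists = np.sum((train_feats - x) ** 2, axis=1)
            nearest = train_labels[np.argsort(dists)[:k]]
            labels, counts = np.unique(nearest, return_counts=True)
            preds.append(labels[np.argmax(counts)])
        return np.array(preds)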
The non-linearity problem is also widely addressed by artificial neural network classifiers. One such adaptive classifier is the Average Learning Subspace Method (ALSM) developed by Oja /Oja83/. Because of its reported suitability for texture classification, especially in the context of cooccurrence statistics and power spectral methods, it is the third classifier adopted in this comparison.
In Chapter 2 we review the texture descriptors, their technical implementation, and the feature extraction methods utilized in this project. Chapter 3 concentrates on the description of the classifiers used, and Chapter 4 gives a summary of the results. Finally, Chapter 5 draws some conclusions.
2. TEXTURE DESCRIPTORS
Haralick defines texture as consisting of two basic dimensions /Harali79/. The first consists of the image texture elements themselves, and the second of the spatial dependencies between these elements. This spatial organization may be random or may have dependencies between its primitives; the dependence may be structural, probabilistic, or functional. Texture can be described with such words as fine, coarse, smooth, granular, regular, irregular, random, or structural.
Even today there is no exact mathematical definition of texture, and we still rely on these loose descriptions. A large part of the texture analysis techniques are in fact ad hoc, and many statistical approaches to the measurement and characterization of image texture exist. In statistical methods, the pixels are assumed to have a spatial distribution with certain statistical characteristics, and the analysis techniques try to estimate the corresponding parameters. The statistical characteristics that are measured make the difference between the methods. For a good survey see e.g. /Harali79/, /Harali86/, or /GoDeOo85/.
In the present comparison we have chosen four texture descriptors which are reported to possess good discriminative characteristics, namely the second order cooccurrence statistics /HaShDi73/, the 2D power spectrum /Bajcsy73/, the fractal descriptors /Pentla83/, and the amplitude varying rate approach /ZhuDun90/. These methods, the problems involved, and their technical implementation are addressed in Chapters 2.1-2.4.
2.1 Second order (cooccurrence) statistics
There is psychovisual evidence that two textures with identical second order statistics are not separable from each other /Julesz62/. Later it was pointed out by Gagalowicz and Tournier-Lasserve that this does not hold for non-homogeneous textures /GagTou86/; they also claim that natural textures are usually inhomogeneous. In practice, however, it seems to be a good approximation for texture distinguishability.
The cooccurrence matrix (often referred to as the Gray Tone Spatial Dependence Matrix) is an estimate of the second order joint conditional probability density function, and is defined in /DyHoRo80/ as follows: the cooccurrence matrix is a G*G matrix whose entry (i,j) is the number of times gray levels i and j occur at separation d in a picture that has been quantized to G levels.
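This definition can be sketched as follows; the displacement vector and the number of quantization levels are illustrative assumptions, and some formulations additionally symmetrize the matrix by adding its transpose.

    import numpy as np

    def cooccurrence_matrix(image, d=(0, 1), levels=16):
        # Entry (i, j) counts how often gray levels i and j occur at
        # displacement d = (row offset, column offset).  The displacement
        # and the number of quantization levels are illustrative choices.
        img = np.asarray(image, dtype=np.float64)
        q = np.floor(img / (img.max() + 1.0) * levels).astype(int)
        q = np.clip(q, 0, levels - 1)
        dr, dc = d
        rows, cols = q.shape
        # Pair every pixel with its neighbour at the given displacement
        ref = q[max(0, -dr):rows - max(0, dr), max(0, -dc):cols - max(0, dc)]
        nbr = q[max(0, dr):rows - max(0, -dr), max(0, dc):cols - max(0, -dc)]
        matrix = np.zeros((levels, levels), dtype=np.int64)
        np.add.at(matrix, (ref.ravel(), nbr.ravel()), 1)
        return matrix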
The high dimensionality of the cooccurrence matrix poses the first problem. In 8-bit images, a straightforward application of the definition yields a 256*256 matrix.