
descriptor and was utilized by Peleg et al. /PeNaHa84/ in the context of texture classification. Peleg et al. called this descriptor the fractal signature and demonstrated the power of the method on a set of Brodatz textures.
The technical problem of measuring the fractal dimension of surfaces is quite difficult. Recently, Roy et al. showed empirically that dramatically different dimensions (2.01-2.33) can be obtained by applying different algorithms to the same data /RoGrGa87/. Most of the methods utilized are based on the so-called variogram approach, which can be performed both in the spatial and frequency domains (see e.g. /Pentla83/). This shows that the fractal descriptor indeed has some similarities with the other two methods presented above (using Fourier spectra or second-order statistics). Other methods utilize e.g. the number of cubes that are needed to estimate the volume /NaSoTa87/, the surface area covered by a blanket /PeNaHa84/, etc.
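To make the variogram idea concrete, the following is a minimal spatial-domain sketch. It treats a gray-level image as a surface and uses horizontal lags only; the function name, the lag range and the fractional-Brownian relation D = 3 - H are our illustration, not the specific algorithm of /Pentla83/:

    import numpy as np

    def variogram_dimension(surface, max_lag=20):
        """Estimate a fractal dimension via the variogram approach:
        fit log E[(z(x+h) - z(x))^2] ~ 2H log h, then D = 3 - H."""
        z = surface.astype(np.float64)
        lags = np.arange(1, max_lag + 1)
        # Mean squared gray-level increment at each horizontal lag h.
        gamma = np.array([np.mean((z[:, h:] - z[:, :-h]) ** 2) for h in lags])
        # The slope of the log-log fit is 2H for a fractional Brownian surface.
        slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
        return 3.0 - slope / 2.0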
The approach we have used originates from the classical problem of measuring the length of a coastline. In a texture window, the lengths of specific profiles are computed at each scale. According to the self-similarity property of fractals, N(e) * e^D = C should hold at each scale. Here e is the scale, N(e) the number of units needed to cover the profile at that scale, and C a constant. By taking the logarithm of this equation, the parameters C and D can be estimated with a least-squares procedure. The estimation is done separately in each of the four main directions at 10 successive scales. This produces either 4 descriptors (fractal dimensions) or 32 descriptors (in the case of fractal signatures). In each direction, three profiles are used and the final value is an average over these three profiles. The fractal signatures are always computed with the help of three successive scales, similarly to the works of /PeNaHa84/ and /NaSoTa87/.
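The estimation step can be illustrated with a short sketch. This is one standard divider-style reading of the procedure, assuming a one-dimensional gray-level profile and the 10 scales mentioned above; the exact way profile lengths are measured in our implementation may differ in detail:

    import numpy as np

    def profile_dimension(profile, scales=range(1, 11)):
        """Fit log N(e) = log C - D log e by least squares over the
        given scales and return the estimates of D and C."""
        y = np.asarray(profile, dtype=np.float64)
        log_e, log_n = [], []
        for e in scales:
            pts = y[::e]                                # sample every e pixels
            seg = np.sqrt(e ** 2 + np.diff(pts) ** 2)   # segment lengths
            log_e.append(np.log(e))
            log_n.append(np.log(seg.sum() / e))         # N(e): units of size e
        slope, intercept = np.polyfit(log_e, log_n, 1)  # slope equals -D
        return -slope, np.exp(intercept)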
2.4 The amplitude varying rate approach 
Very recently, Zhuang and Dunn reported a new texture measure, which they called the amplitude varying rate approach /ZhuDun90/. In their method the Amplitude Varying Rate Matrix (AVRM) is computed by examining the profile of each scan line in a fixed direction and recording the frequencies of distances between pixels with the same gray level. From this matrix they are able to estimate the sizes of the primitives and the periodicity and contrast of textures. Zhuang and Dunn strongly argued that their method is better than the cooccurrence matrix method, because its features have direct physical interpretations. They also showed empirically that their algorithm works better than second-order statistics. This result is questionable, however, because only Haralick's five most popular features were used.
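The matrix construction admits a straightforward sketch under one plausible reading of the verbal description above; the exact definition in /ZhuDun90/ may differ, and the gray-level quantization and distance cut-off are our assumptions:

    import numpy as np

    def avrm(image, num_levels=16, max_dist=32):
        """Entry (g, d) counts how often two successive occurrences of
        gray level g on a horizontal scan line lie d pixels apart."""
        z = image.astype(np.float64)
        img = np.floor(z / (z.max() + 1.0) * num_levels).astype(int)
        m = np.zeros((num_levels, max_dist + 1), dtype=np.int64)
        for row in img:                        # fixed (horizontal) direction
            for g in range(num_levels):
                cols = np.flatnonzero(row == g)
                for d in np.diff(cols):        # gaps between same-level pixels
                    if d <= max_dist:
                        m[g, d] += 1
        return m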
Because of these promising results we wanted to include this descriptor in the test. Again, the AVRM method was utilized in the same framework as the first two methods reported.
3. CLASSIFIERS 
Because the usual assumption of multinormal probability density functions in the context of parametric classifiers does not hold in texture classification, the Bayesian optimality of such a classification system is severely violated. That is why the maximum likelihood (ML) classifier serves here only as a reference. The other two classifiers applied (k-NN and ALSM) are both non-parametric and can better adapt themselves to non-linear decision boundaries.
3.1 The k-NN classifier 
The k-NN classifier (see e.g. /DevKit82/) can be regarded as the most important classifier with respect to practical applications. It has been proven in /CovHar67/ that the (large-sample) error rate of the k-NN classifier monotonically decreases towards the optimal error bound of a Bayesian classifier as k goes towards infinity. When the sample size is finite, this no longer holds /Devivj80/.
The proper choice of k is of course a difficult problem. In principle, one should choose k as large as possible, but practical problems will occur because of the finite sample sizes (increasing k does not monotonically improve the classification accuracy). A useful guideline given in the literature suggests selecting k proportional to the square root of the sample size.
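As an illustration of this guideline, here is a minimal k-NN voting sketch; the function name and the tie-avoiding odd-k adjustment are our additions:

    import numpy as np

    def knn_classify(train_x, train_y, x, k=None):
        """Classify x by a majority vote among its k nearest training
        samples; the default k follows the sqrt(n) guideline."""
        if k is None:
            k = max(1, int(round(np.sqrt(len(train_x)))))
            k += (k % 2 == 0)                # odd k avoids two-class ties
        d = np.linalg.norm(train_x - x, axis=1)
        labels = train_y[np.argsort(d)[:k]]
        values, counts = np.unique(labels, return_counts=True)
        return values[np.argmax(counts)]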
The k-NN classifier has not been widely used in practical applications because of the storage and computational complexity it imposes. However, techniques have been presented that make it competitive with traditional methods in this respect. The solution is to use two preprocessing techniques, namely Editing and Condensing (see /DevKit80/). The idea is to select a small subset from the training set such that 1-NN classification with the reduced dataset achieves a performance which is close to or better than the performance of 1-NN classification with the complete set. The editing procedure is based on the holdout technique and can be summarized as follows /DevKit80/ (a code sketch is given after the list):
(1) Make a random partition of the available training data into N subsets (diffusion).
(2) Classify the samples in subset i using the k-NN rule with subset MOD((i+1), N) as the reference set (classification).
(3) Discard all the samples that were misclassified at step 2 (editing).
(4) Pool the remaining data to constitute a new data set (confusion).
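A compact single-pass sketch of this procedure follows; variable names are ours, and /DevKit80/ iterate such passes until no more samples are discarded:

    import numpy as np

    def edit_holdout(X, y, n_subsets=3, k=3, seed=0):
        """One editing pass: diffusion, classification, editing, confusion."""
        rng = np.random.default_rng(seed)
        parts = np.array_split(rng.permutation(len(X)), n_subsets)  # (1) diffusion
        keep = []
        for i, part in enumerate(parts):
            ref = parts[(i + 1) % n_subsets]          # subset MOD((i+1), N)
            for j in part:                            # (2) classification
                d = np.linalg.norm(X[ref] - X[j], axis=1)
                votes = y[ref][np.argsort(d)[:k]]
                vals, cnts = np.unique(votes, return_counts=True)
                if vals[np.argmax(cnts)] == y[j]:     # (3) editing: keep only
                    keep.append(j)                    #     correctly classified
        keep = np.array(keep)
        return X[keep], y[keep]                       # (4) confusion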