2. TEXTURE ANALYSIS METHODS
In this section we briefly describe the four methods used for texture analysis and feature extraction: (1) statistical methods based on the grey level co-occurrence matrix, (2) energy filters and the edgeness factor, (3) Gabor filters, and (4) wavelet transform based methods.
2.1 Grey level co-occurrence matrix (GLCM)
The elements of this matrix, p(i,j), represent the relative frequency with which two pixels with grey levels "i" and "j", separated by a distance "d" in a given direction, occur in the image or neighbourhood. It is a symmetrical matrix, and its elements are expressed by
p(i, j) = \frac{P(i, j)}{\sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} P(i, j)}     (1)
where N_g represents the total number of grey levels. Using this matrix, Haralick (1973) proposed several statistical features representing texture properties, such as contrast, uniformity, mean, variance, inertia moments, etc. Some of those features were calculated, selected and used in this study.
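As an illustration of this step, the sketch below computes a few GLCM-based features with scikit-image (assumed available); the window size, distances, angles and feature subset are illustrative choices, not necessarily those selected in this study.

```python
# Minimal sketch of GLCM-based texture features, assuming scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # "greycomatrix" in older releases

def glcm_features(window, distances=(1,), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4), levels=256):
    """Compute a few Haralick-style features from a grey-level window."""
    # P(i, j): counts of pixel pairs at the given distance/angle;
    # normed=True applies the normalisation of Eq. (1).
    glcm = graycomatrix(window, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)
    return {
        "contrast":    graycoprops(glcm, "contrast").mean(),
        "homogeneity": graycoprops(glcm, "homogeneity").mean(),
        "energy":      graycoprops(glcm, "energy").mean(),
        "correlation": graycoprops(glcm, "correlation").mean(),
    }

# Example on a random 32x32 8-bit window
window = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
print(glcm_features(window))
```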
2.2 Energy filters and edgeness
The energy filters (Laws, 1985) were designed to enhance some textural properties of the images. This method is based on the application of convolutions to the original image, I, using different filters g_1, g_2, ..., g_N, thereby obtaining N new images J_n = I * g_n (n = 1, ..., N). Then, the energy in the neighbourhood of each pixel is calculated. In order to reduce the error due to the border effect between different textures, a post-processing method proposed by Hsiao and Sawchuk (1989) was used. This method computes, for each pixel of the filtered image J_n, the mean and variance of the four square neighbourhoods in which that pixel is a corner, and assigns as the final value for that pixel the mean of the neighbourhood with the lowest variance, which is assumed to be the most homogeneous and, consequently, should contain only one type of texture (no borders).
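A minimal sketch of this procedure is given below, using NumPy and SciPy; the particular Laws kernels, the window size w and the approximation of the four corner neighbourhoods by image shifts are assumptions made for illustration, not the exact implementation used in this study.

```python
# Sketch of Laws-style energy filtering followed by a minimum-variance
# quadrant smoothing in the spirit of Hsiao and Sawchuk (1989).
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# A few separable Laws 1x5 vectors: Level, Edge, Spot
L5 = np.array([1, 4, 6, 4, 1], dtype=float)
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)
kernels = [np.outer(a, b) for a in (L5, E5, S5) for b in (L5, E5, S5)]

def laws_energy(image, w=15):
    """Filter the image with each Laws kernel and return local energy maps."""
    energies = []
    for g in kernels[1:]:                      # skip the pure L5L5 (local mean) kernel
        j = convolve(image.astype(float), g)   # J_n = I * g_n
        energies.append(uniform_filter(np.abs(j), size=w))  # mean |J_n| in a w x w window
    return energies

def min_variance_quadrant(feature, w=15):
    """For each pixel, keep the mean of the w x w neighbourhood (pixel at a
    corner, approximated here by shifting the image) with the lowest variance."""
    means, variances = [], []
    for dy in (-(w // 2), w // 2):
        for dx in (-(w // 2), w // 2):
            shifted = np.roll(feature, (dy, dx), axis=(0, 1))
            m = uniform_filter(shifted, size=w)
            v = uniform_filter(shifted ** 2, size=w) - m ** 2
            means.append(m)
            variances.append(v)
    means, variances = np.stack(means), np.stack(variances)
    best = np.argmin(variances, axis=0)
    return np.take_along_axis(means, best[None], axis=0)[0]
```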
The edgeness factor is a feature that represents the density of edges present in a neighbourhood. Thus, the gradient of an image I is computed as a function of the distance "d" between neighbouring pixels, using the expression:
g(i, j, d) = |I(i, j) - I(i + d, j)| + |I(i, j) - I(i - d, j)| + |I(i, j) - I(i, j + d)| + |I(i, j) - I(i, j - d)|     (2)
where g(i,j,d) represents the edgeness per unit area surrounding
a generic pixel (i,j) (Sutton and Hall, 1972).
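For reference, Eq. (2) can be evaluated directly with array shifts; the following NumPy sketch illustrates the computation, with an assumed averaging window to express edgeness per unit area and wrap-around handling at the image borders.

```python
# Sketch of the edgeness factor of Eq. (2): sum of absolute grey-level
# differences to the four neighbours at distance d, averaged over a window.
import numpy as np
from scipy.ndimage import uniform_filter

def edgeness(image, d=1, window=15):
    img = image.astype(float)
    g = (np.abs(img - np.roll(img,  d, axis=0)) +
         np.abs(img - np.roll(img, -d, axis=0)) +
         np.abs(img - np.roll(img,  d, axis=1)) +
         np.abs(img - np.roll(img, -d, axis=1)))
    # Density of edges in the neighbourhood surrounding each pixel
    return uniform_filter(g, size=window)
```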
2.3 Gabor filters
These filters are based on multichannel filtering, which emulates some characteristics of the human visual system. The human visual system decomposes the image formed on the retina into several filtered images, each of them having variations in intensity within a limited range of frequencies and orientations (Jain and Farrokhnia, 1991). A Gabor filter bank is composed of a set of Gaussian filters that cover the frequency domain with different radial frequencies and orientations. In the spatial domain, a Gabor filter h(x,y) is a Gaussian function modulated by a sinusoidal function:
h(x, y) = \frac{1}{2\pi\sigma^2} \exp\left[-\frac{x^2 + y^2}{2\sigma^2}\right] \exp\left(j 2\pi F (x \cos\theta + y \sin\theta)\right)     (3)
where σ determines the spatial coverage of the filter. In the frequency domain, the Gabor function is a Gaussian curve (Bodnarova et al., 2002). The Fourier transform of the Gabor function is:

H(u, v) = \exp\left[-2\pi^2 \sigma^2 \left((u - F \cos\theta)^2 + (v - F \sin\theta)^2\right)\right]     (4)
The parameters that define each of the filters are:
1. The radial frequency (F) where the filter is centered in the frequency domain.
2. The standard deviation (σ) of the Gaussian curve.
3. The orientation (θ).
For the purpose of simplicity, we assume that the Gaussian curve is symmetrical. The filter bank was created with 6 orientations (0°, 30°, 60°, 90°, 120° and 150°) and 3 combinations of frequency and standard deviation: F = 0.3536 and σ = 2.865, F = 0.1768 and σ = 5.73, F = 0.0884 and σ = 11.444. This produced a total of 18 filters covering the frequency plane. Once the filters were applied and their magnitudes computed, each resulting image was convolved with a Gaussian filter (σ = 5) to reduce the variance.
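The construction of such a bank can be sketched as follows with scikit-image's gabor_kernel (assumed available); the convolution and smoothing details are illustrative and may differ from the implementation actually used.

```python
# Sketch of the Gabor filter bank described above: 6 orientations x 3
# frequency/sigma pairs, magnitude response, then Gaussian post-smoothing.
import numpy as np
from scipy.ndimage import convolve, gaussian_filter
from skimage.filters import gabor_kernel

orientations = np.deg2rad([0, 30, 60, 90, 120, 150])
freq_sigma = [(0.3536, 2.865), (0.1768, 5.73), (0.0884, 11.444)]

def gabor_features(image, smooth_sigma=5):
    image = image.astype(float)
    features = []
    for theta in orientations:
        for F, sigma in freq_sigma:
            k = gabor_kernel(frequency=F, theta=theta, sigma_x=sigma, sigma_y=sigma)
            # Magnitude of the complex filter response
            real = convolve(image, np.real(k))
            imag = convolve(image, np.imag(k))
            mag = np.hypot(real, imag)
            # Post-smoothing with a Gaussian (sigma = 5) to reduce the variance
            features.append(gaussian_filter(mag, smooth_sigma))
    return features  # 18 feature images
```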
2.4 Wavelet transform
The use of the wavelet transform for texture analysis was first proposed by Mallat (1989). This transform provides a robust methodology for texture analysis at different scales. The wavelet transform decomposes a signal using a series of elemental functions, called wavelets and scaling functions, which are created by scalings and translations of a base function known as the mother wavelet:
\psi_{s,u}(x) = \frac{1}{\sqrt{s}} \psi\left(\frac{x - u}{s}\right), \quad s \in \mathbb{R}^{+}, \; u \in \mathbb{R}     (5)
where "s" governs the scaling and "u" the translation. The wavelet decomposition of a function is obtained by applying each of the elemental functions or wavelets to the original function:
Wf(s, u) = \int_{-\infty}^{+\infty} f(x) \frac{1}{\sqrt{s}} \psi^{*}\left(\frac{x - u}{s}\right) dx     (6)
In practice, wavelets act as high-pass filters, while scaling functions act as low-pass filters. As a result, the wavelet transform decomposes the original image into a series of images at different scales, called trends and fluctuations. The former are averaged versions of the original image, and the latter contain the high frequencies at different scales or levels.
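A minimal sketch of such a multiscale decomposition, assuming the PyWavelets package and an arbitrary mother wavelet (Haar, chosen only for illustration), is given below; it returns the trend (approximation) image and the fluctuation (detail) images at each level.

```python
# Two-level 2-D wavelet decomposition into trends and fluctuations with PyWavelets.
import numpy as np
import pywt

image = np.random.rand(256, 256)

# wavedec2 returns [trend_N, (cH_N, cV_N, cD_N), ..., (cH_1, cV_1, cD_1)]
coeffs = pywt.wavedec2(image, wavelet="haar", level=2)
trend = coeffs[0]          # low-pass (scaling) output: averaged version of the image
fluctuations = coeffs[1:]  # high-pass (wavelet) outputs per level

for lvl, (cH, cV, cD) in enumerate(fluctuations, start=1):
    print(f"level {lvl}: horizontal {cH.shape}, vertical {cV.shape}, diagonal {cD.shape}")
```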