3.1 PAR Algorithm
This is a widely used decision rule based on simple Boolean
"and/or" logic. Training data in n spectral bands are used in
performing the classification. Brightness values from each
pixel of the multispectral image are used to produce an
n-dimensional mean vector, M_c = (μ_c1, μ_c2, ..., μ_cn), with μ_ck
being the mean value of the training data obtained for class c in
band k out of m possible classes. s_ck is the standard deviation of
the training data for class c in band k out of m possible classes.
Using a one-standard-deviation threshold, a parallelepiped
algorithm decides BV_ijk is in class c if, and only if,

μ_ck − s_ck ≤ BV_ijk ≤ μ_ck + s_ck    (1)
where c = 1, 2, ..., m is the number of classes and k = 1, 2, ..., n
is the number of bands. Therefore, if the low and high decision
boundaries are defined as
L_ck = μ_ck − s_ck    (2)
and
H_ck = μ_ck + s_ck    (3)
the parallelepiped algorithm becomes
L_ck ≤ BV_ijk ≤ H_ck    (4)
These decision boundaries form an n-dimensional
parallelepiped in feature space. If the pixel value lies above the
lower threshold and below the high threshold for all n bands
evaluated, it is assigned to that class. When an unknown pixel
does not satisfy any of the Boolean logic criteria, it is assigned
to an unclassified category.
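The decision rule above can be sketched in NumPy as follows; the function name, array layout, and the configurable threshold parameter (defaulting to the paper's one standard deviation) are illustrative assumptions, not part of the original text:

```python
import numpy as np

def parallelepiped_classify(pixels, means, stds, threshold=1.0):
    """Parallelepiped decision rule.

    pixels: (N, n_bands) brightness values BV_ijk
    means:  (m, n_bands) class means mu_ck
    stds:   (m, n_bands) class standard deviations s_ck
    A pixel is assigned to the first class whose box
    [mu - threshold*s, mu + threshold*s] contains it in ALL bands;
    pixels matching no class are labeled -1 (unclassified).
    """
    labels = np.full(pixels.shape[0], -1, dtype=int)
    for c in range(means.shape[0]):
        low = means[c] - threshold * stds[c]
        high = means[c] + threshold * stds[c]
        inside = np.all((pixels >= low) & (pixels <= high), axis=1)
        # only label pixels not already claimed by an earlier class
        labels[inside & (labels == -1)] = c
    return labels
```

Note that, unlike the minimum distance rule below, overlapping boxes are resolved here by class order, and pixels outside every box stay unclassified.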
3.2 MID Algorithm
For two bands, k and l, the Euclidean distance is

Dist = √[(BV_ijk − μ_ck)² + (BV_ijl − μ_cl)²]    (5)

where μ_ck and μ_cl represent the mean values for class c
measured in bands k and l.
It should be obvious that any unknown pixel will definitely be
assigned to one of the training classes using this algorithm.
There will be no unclassified pixels.
When more than two bands are evaluated in a classification, it
is possible to extend the logic to the distance between two
points a and b in n-dimensional space using the equation
(Schalkoff, 1992):

D_ab = √[ Σ_{i=1}^{n} (a_i − b_i)² ]    (6)
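A minimal NumPy sketch of this minimum distance rule (the function name and array shapes are assumptions for illustration):

```python
import numpy as np

def min_distance_classify(pixels, means):
    """Assign each pixel to the class with the nearest mean vector.

    pixels: (N, n_bands) brightness values
    means:  (m, n_bands) class mean vectors
    Returns an (N,) array of class indices; every pixel receives a
    label, so there are no unclassified pixels.
    """
    # (N, m) matrix of Euclidean distances from each pixel to each mean
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

Because argmin always yields some class, this matches the observation above that no pixel remains unclassified under this algorithm.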
3.3 MAL Algorithm
The maximum likelihood decision rule assigns each pixel
having pattern measurements or features X to the class c whose
units are most probable or likely to have given rise to feature
vector X (Foody et al., 1992). It assumes that the training data
statistics for each class in each band are normally distributed,
that is, Gaussian (Blaisdell, 1993). In other words, training
data with bi- or trimodal histograms in a single band are not
ideal.
The decision rule applied to the unknown measurement vector
X is (Schalkoff, 1992):
Decide X is in class c if, and only if,
p_c ≥ p_i    (7)
where i =1, 2, 3, ..., m possible classes, and
p_c = −½ {log_e[det(V_c)] + (X − M_c)^T V_c⁻¹ (X − M_c)}    (8)

and det(V_c) is the determinant of the covariance matrix V_c.
Therefore, to classify the measurement vector X of an unknown
pixel into a class, the maximum likelihood decision rule
computes the value p_c for each class. Then it assigns the pixel
to the class that has the largest (or maximum) value.
This assumes that each class has an equal probability of
occurring in the terrain. However, in most remote sensing
applications, there is a high probability of encountering some
classes more often than others. Thus, we would expect more
pixels to be classified as some class simply because it is more
prevalent in the terrain. It is possible to include this valuable a
priori (prior knowledge) information in the classification
decision. We can do this by weighting each class c by its
appropriate a priori probability, a_c. The equation then
becomes:
Decide X is in class c if, and only if,
p_c(a_c) ≥ p_i(a_i)    (9)
where i = 1, 2, 3, ..., m possible classes, and
p_c(a_c) = log_e(a_c) − ½ {log_e[det(V_c)] + (X − M_c)^T V_c⁻¹ (X − M_c)}    (10)
This Bayes’s decision rule is identical to the maximum
likelihood decision rule except that it does not assume that
each class has equal probabilities.
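Equations (7)–(10) can be sketched together in NumPy; the function name and the optional priors argument are illustrative assumptions, and with priors omitted the rule reduces to plain maximum likelihood as described above:

```python
import numpy as np

def bayes_classify(pixels, means, covs, priors=None):
    """Maximum likelihood / Bayes decision rule.

    Per class c, computes
        p_c = log(a_c) - 0.5*[log det(V_c) + (x-M_c)^T V_c^{-1} (x-M_c)]
    and assigns each pixel to the class with the largest p_c.
    With equal priors this is the plain maximum likelihood rule.
    """
    n_classes = means.shape[0]
    if priors is None:
        priors = np.full(n_classes, 1.0 / n_classes)  # equal a priori
    scores = np.empty((pixels.shape[0], n_classes))
    for c in range(n_classes):
        inv = np.linalg.inv(covs[c])
        _, logdet = np.linalg.slogdet(covs[c])  # stable log det(V_c)
        diff = pixels - means[c]
        # quadratic form (x - M_c)^T V_c^{-1} (x - M_c) per pixel
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores[:, c] = np.log(priors[c]) - 0.5 * (logdet + maha)
    return np.argmax(scores, axis=1)
```

Skewing the priors shifts the decision boundary toward the more prevalent class, which is exactly the effect of the a priori weighting in equation (10).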
3.4 MAD Algorithm
The Mahalanobis distance classification is a direction-sensitive
distance classification algorithm that uses statistics
for each class (Research Systems, Inc., 1996). It is similar to
the maximum likelihood classification algorithm but assumes
all class covariances are equal and therefore is a faster method.
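Under the shared-covariance assumption, only the quadratic form varies between classes, so the rule reduces to a nearest-mean search in Mahalanobis distance. A minimal sketch, assuming a single pooled covariance matrix is supplied (the function name and shapes are illustrative):

```python
import numpy as np

def mahalanobis_classify(pixels, means, pooled_cov):
    """Mahalanobis distance classifier.

    All classes share one covariance matrix V, so each pixel is
    assigned to the class minimizing (x - M_c)^T V^{-1} (x - M_c).
    """
    inv = np.linalg.inv(pooled_cov)
    diffs = pixels[:, None, :] - means[None, :, :]       # (N, m, bands)
    # squared Mahalanobis distance of every pixel to every class mean
    d2 = np.einsum('nmj,jk,nmk->nm', diffs, inv, diffs)  # (N, m)
    return np.argmin(d2, axis=1)
```

With an identity covariance this coincides with the minimum distance rule; a non-spherical covariance stretches the distance metric along the correlated band directions, which is what makes the classifier direction sensitive.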
International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 7, Budapest, 1998 401