3 THE AUTOREGRESSIVE MODEL OF CLASS FREQUENCIES
Several different stochastic models can be used to model the thematic-map class frequencies. However, computational complexity is a serious limiting factor; therefore the autoregressive model (7) was chosen.
$Y_t = \sum_{i=1}^{N} A_i Y_{t-i} + E_t$   (7)
where $Y_t$, $E_t$ are $K$-dimensional vectors, $A_i$ are $K \times K$ matrices of unknown model parameters, and $N$ is the order of the model. $E_t$ is a white-noise vector with the following properties for $t > N$:
$E[E_t] = 0$

$E[E_t E_{t-i}^T] = 0, \quad i \neq 0,\ i < t$   (8)

$E[E_t Y_{t-i}^T] = 0, \quad 0 < i < t$
We assume that the probability density of $E_t$ is multidimensional normal, independent of the previous data, and the same for every time $t$:
$E[E_t E_t^T] = \Omega$   (9)
$\Omega$ is a constant $K \times K$ covariance matrix. The task consists of finding the estimate $\hat{Y}$ (3) as a function of the known process history:
$\hat{Y}_t = E[Y_t \mid Y^{(t-1)}]$   (10)
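To make the model (7)-(9) concrete, the following is a minimal simulation sketch in Python/NumPy. The dimensions, the parameter matrices $A_i$ and the covariance $\Omega$ below are purely illustrative assumptions, not values from this study:

```python
import numpy as np

rng = np.random.default_rng(0)

K, N, T = 4, 2, 500           # classes, model order, sequence length (assumed)

# Hypothetical parameter matrices A_1..A_N (eq. 7); small entries keep
# the simulated process from diverging.
A = [0.3 * np.eye(K) + 0.05 * rng.standard_normal((K, K)) for _ in range(N)]
Omega = 0.01 * np.eye(K)      # constant noise covariance Omega (eq. 9)

Y = np.zeros((T, K))
Y[:N] = rng.random((N, K))    # arbitrary start-up values Y_1..Y_N
for t in range(N, T):
    E_t = rng.multivariate_normal(np.zeros(K), Omega)   # white noise (eq. 8)
    Y[t] = sum(A[i] @ Y[t - 1 - i] for i in range(N)) + E_t
```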
To construct the estimator (3), we need to derive the conditional probability density

$p(Y_t \mid Y^{(t-1)})$   (11)
Using Bayesian estimation theory (Peterka, 1981), we can express (11) in the form of a Student's distribution

$p(Y_t \mid Y^{(t-1)}) = \pi^{-K/2}\, \Gamma\!\left(\frac{\gamma(t)-\beta+K+1}{2}\right) \Big/ \left\{ \Gamma\!\left(\frac{\gamma(t)-\beta+1}{2}\right) \left(1 + Z_t^T V_{z(t-1)}^{-1} Z_t\right)^{K/2} |\lambda_{t-1}|^{1/2} \right\} \left[1 + \frac{(Y_t - P_{t-1}^T Z_t)^T \lambda_{t-1}^{-1} (Y_t - P_{t-1}^T Z_t)}{1 + Z_t^T V_{z(t-1)}^{-1} Z_t}\right]^{-(\gamma(t)-\beta+K+1)/2}$   (12)
with the conditional mean value

$\hat{Y}_t = P_{t-1}^T Z_t$   (13)

where $P_{t-1}$ is the estimate (15) of the $K \times \beta$ matrix of model parameters (14),

$P^T = [A_1, \ldots, A_N]$   (14)

$P_{t-1} = V_{z(t-1)}^{-1} V_{zy(t-1)}$   (15)

and

$Z_t = [Y_{t-1}^T, \ldots, Y_{t-N}^T]^T$   (16)

is the $\beta \times 1$ data vector ($\beta = KN$). The following notation was used in (12):

$V_{t-1} = \tilde{V}_{t-1} + V_N$   (17)

$\gamma(t-1) = \gamma(N) + t - 1 - N$   (18)

$\lambda_{t-1} = V_{y(t-1)} - V_{zy(t-1)}^T V_{z(t-1)}^{-1} V_{zy(t-1)}$   (19)

$\tilde{V}_{t-1} = \begin{pmatrix} \tilde{V}_{y(t-1)} & \tilde{V}_{zy(t-1)}^T \\ \tilde{V}_{zy(t-1)} & \tilde{V}_{z(t-1)} \end{pmatrix}$   (20)

$\tilde{V}_{y(t-1)} = \sum_{k=N+1}^{t-1} Y_k Y_k^T$   (21)

$\tilde{V}_{zy(t-1)} = \sum_{k=N+1}^{t-1} Z_k Y_k^T$   (22)

$\tilde{V}_{z(t-1)} = \sum_{k=N+1}^{t-1} Z_k Z_k^T$   (23)

$V_N$ is a positive definite matrix and

$\gamma(N) > N(1+K) - 2.$   (24)
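Under the simplifying assumptions that the prior $V_N$ is taken as $\varepsilon I$ on each block and $\gamma(N) = N(1+K) - 1$ (which satisfies (24)), the predictor (13) and the predictive density (12) can be sketched as follows; the function name and defaults are illustrative, not from the paper:

```python
import numpy as np
from scipy.special import gammaln

def ar_predictor(Y, t, N, eps=1e-3, gamma_N=None):
    """One-step predictor (13) and the log of the predictive density (12)
    for the observation Y[t], given the history Y[:t]."""
    T, K = Y.shape
    beta = K * N
    if gamma_N is None:
        gamma_N = N * (1 + K) - 1                 # satisfies condition (24)

    def z(k):                                     # data vector Z_k, eq. (16)
        return np.concatenate([Y[k - 1 - i] for i in range(N)])

    # Sample statistics (21)-(23), accumulated over k = N+1 .. t-1,
    # with the prior V_N taken as eps*I (assumption).
    V_z  = eps * np.eye(beta)
    V_zy = np.zeros((beta, K))
    V_y  = eps * np.eye(K)
    for k in range(N + 1, t):
        Zk = z(k)
        V_z  += np.outer(Zk, Zk)
        V_zy += np.outer(Zk, Y[k])
        V_y  += np.outer(Y[k], Y[k])

    P   = np.linalg.solve(V_z, V_zy)              # parameter estimate (15)
    lam = V_y - V_zy.T @ np.linalg.solve(V_z, V_zy)   # lambda_{t-1}, eq. (19)

    Zt    = z(t)
    Y_hat = P.T @ Zt                              # predictor (13)
    c     = 1.0 + Zt @ np.linalg.solve(V_z, Zt)
    g     = gamma_N + t - N                       # gamma(t), following (18)
    r     = Y[t] - Y_hat
    quad  = r @ np.linalg.solve(lam, r)
    logp  = (-0.5 * K * np.log(np.pi)
             + gammaln((g - beta + K + 1) / 2) - gammaln((g - beta + 1) / 2)
             - 0.5 * K * np.log(c) - 0.5 * np.linalg.slogdet(lam)[1]
             - 0.5 * (g - beta + K + 1) * np.log(1.0 + quad / c))  # eq. (12)
    return Y_hat, logp
```

With the simulated sequence from the sketch above, `Y_hat, logp = ar_predictor(Y, t=100, N=2)` yields the one-step prediction of `Y[100]` and the corresponding Student log-density.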
4 NUMERICAL REALIZATION

The predictor (13) can be evaluated by updating the matrix $V_t$ (17) and subsequently inverting it. Another possibility is direct updating of $P_t$. According to (Peterka, 1981), it is advantageous for the numerical stability of the solution to calculate (15) by means of the square-root filter REFIL (Peterka, 1981), which guarantees the positive definiteness of the matrix (17). The REFIL filter directly updates the Cholesky square root of the matrix $V_{t-1}^{-1}$.
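REFIL itself is specified in (Peterka, 1981). As a generic illustration of the kind of operation a square-root filter performs, the following sketch (not the REFIL algorithm itself) is a standard rank-one Cholesky update: it refreshes the lower-triangular factor after adding one dyad $xx^T$, such as $Z_k Z_k^T$ in (23), without re-factorizing the whole matrix:

```python
import numpy as np

def chol_rank1_update(L, x):
    """Given lower-triangular L with A = L @ L.T, return the Cholesky
    factor of A + x @ x.T in O(n^2) operations."""
    L, x = L.copy(), x.copy()
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])            # rotated diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]     # Givens-like rotation
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```

Updating the factor in place like this preserves positive definiteness by construction, which is the property exploited for the matrix (17).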
The numerical complexity of the proposed classifier is larger than that of the conventional per-point Bayesian one. If we denote by $h$ the number of arithmetic operations necessary to classify one pixel, then the Bayesian classifier in its most efficient version needs

$h(*) = Kd(d+3)/2$

$h(+) = K(d-1)(d+2)/2 + 2K$

The contextual Bayesian classifier is computationally more demanding:

$h(*) = Kd(d+3)/2 + 3K^2N + 2K^2N^2 + 5KN + 16K$

$h(+) = K(d-1)(d+2)/2 + K + 3K^2N + 1.5K^2N^2 + 2.5KN + n$
where $n$ is the number of pixels in the thematic-map window. To avoid overflow problems, the smallest possible single-class predictor value (13) was chosen to be 0.001.
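For a quick comparison of the two classifiers, a small helper can evaluate the operation counts above (a sketch; it assumes the counts exactly as given above, with $d$ the per-pixel feature dimension used in those formulas):

```python
def ops_per_pixel(K, d, N, n):
    """Multiplications h(*) and additions h(+) per classified pixel for the
    per-point Bayesian and the contextual Bayesian classifier (counts as
    reconstructed above)."""
    bayes = (K * d * (d + 3) / 2,
             K * (d - 1) * (d + 2) / 2 + 2 * K)
    contextual = (K * d * (d + 3) / 2 + 3 * K**2 * N + 2 * K**2 * N**2
                  + 5 * K * N + 16 * K,
                  K * (d - 1) * (d + 2) / 2 + K + 3 * K**2 * N
                  + 1.5 * K**2 * N**2 + 2.5 * K * N + n)
    return bayes, contextual
```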
5 EXPERIMENTAL RESULTS
The contextual classification algorithm was applied to an agricultural Thematic Mapper subscene from North Moravia. The comparison was made against the Bayesian per-point classifier. The studied area is a large cooperative farm situated in the Vizovice Hills. The objective of the study was to determine its land-use and land-cover conditions. Ground areas of homogeneous landforms and land-cover conditions